25 years of research in space

MIT Technology Review

MIT astronauts aboard the International Space Station--and the MIT researchers who have sent up experiments--have advanced our understanding of science, space, and the universe. This image of the International Space Station and space shuttle Endeavour, flying at an altitude of approximately 350 kilometers, was taken by Expedition 27 crew member Paolo Nespoli from the Soyuz TMA-20 on May 24, 2011. On November 2, 2000, NASA astronaut Bill Shepherd, OCE '78, SM '78, and Russian cosmonauts Sergei Krikalev and Yuri Gidzenko made history as their Soyuz spacecraft docked with the International Space Station. The event marked the start of 25 years of continuous human presence in space aboard the ISS--a prolific period for space research. MIT-trained astronauts, scientists, and engineers have played integral roles in all aspects of the station's design, assembly, operations, and scientific research. One of MIT's most experienced NASA astronauts, Mike Fincke '89, is celebrating that milestone from space.


Knowledge Graph Sparsification for GNN-based Rare Disease Diagnosis

Cara, Premt, Zaripova, Kamilia, Bani-Harouni, David, Navab, Nassir, Farshad, Azade

arXiv.org Artificial Intelligence

Rare genetic disease diagnosis faces critical challenges: insufficient patient data, inaccessible full genome sequencing, and the immense number of possible causative genes. These limitations cause prolonged diagnostic journeys, inappropriate treatments, and critical delays, disproportionately affecting patients in resource-limited settings where diagnostic tools are scarce. We propose RareNet, a subgraph-based Graph Neural Network that requires only patient phenotypes to identify the most likely causal gene and retrieve focused patient subgraphs for targeted clinical investigation. RareNet can function as a standalone method or serve as a pre-processing or post-processing filter for other candidate gene prioritization methods, consistently enhancing their performance while potentially enabling explainable insights. Through comprehensive evaluation on two biomedical datasets, we demonstrate competitive and robust causal gene prediction and significant performance gains when integrated with other frameworks. By requiring only phenotypic data, which is readily available in any clinical setting, RareNet democratizes access to sophisticated genetic analysis, offering particular value for underserved populations lacking advanced genomic infrastructure.
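The abstract describes ranking candidate genes from patient phenotypes over a biomedical knowledge graph. RareNet's actual subgraph GNN is not reproduced here; the following is a minimal, purely illustrative sketch in which every gene and phenotype name is hypothetical and a simple Jaccard-overlap score stands in for the learned model:

```python
# Toy phenotype-to-gene ranking over a tiny knowledge graph.
# GENE_A/GENE_B/GENE_C and the phenotype terms are invented for
# illustration; RareNet uses a learned subgraph-based GNN, which this
# simple set-overlap score only gestures at.

GENE_PHENOTYPES = {
    "GENE_A": {"seizures", "hypotonia", "ataxia"},
    "GENE_B": {"hypotonia", "cardiomyopathy"},
    "GENE_C": {"seizures", "ataxia", "nystagmus"},
}

def rank_candidate_genes(patient_phenotypes):
    """Score each gene by Jaccard overlap with the patient's phenotype set."""
    scores = {}
    for gene, phenos in GENE_PHENOTYPES.items():
        inter = len(patient_phenotypes & phenos)
        union = len(patient_phenotypes | phenos)
        scores[gene] = inter / union if union else 0.0
    # Highest-scoring genes first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = rank_candidate_genes({"seizures", "ataxia", "nystagmus"})
print(ranking[0])  # ('GENE_C', 1.0)
```

The key property this mirrors is that only phenotypic observations are required as input, with the graph supplying the gene associations.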


AI can simulate a teacher, but it can't shepherd a soul

FOX News

Philosophy professor Dr. Susan Schneider joins 'Fox & Friends First' to discuss the impact of artificial intelligence on students' performance in the classroom. Across America, education is changing at a pace few could have imagined even a decade ago. Artificial intelligence is being deployed to train machines to teach our children. School systems are embedding gender ideology and political agendas into their curriculum with little regard for parental input. At the same time, traditional values are being pushed to the margins, and our students are caught in the middle.


Dog goggles help scientists learn how to best get their attention

Popular Science

There are plenty of strategies to train your dog, but is there a particularly effective method to get your pet pal to pay attention to you? A team of scientists believes the most successful technique likely involves combining two tried-and-true signals--and they gathered data from canines strapped with eye-tracking headgear to back up their theory. Dog owners frequently try communicating with their pets by looking or pointing directly at an object, but a team at the University of Veterinary Medicine Vienna recently wondered if either method (or a combination of the two) worked best. Led by comparative cognition postdoctoral researcher Christoph Völter, researchers introduced various communication scenarios to dogs to learn the answer. To evaluate the best human-to-dog strategy, a researcher first knelt with a bowl on either side of them, only one of which contained a concealed treat.


ShEPhERD: Diffusing shape, electrostatics, and pharmacophores for bioisosteric drug design

Adams, Keir, Abeywardane, Kento, Fromer, Jenna, Coley, Connor W.

arXiv.org Artificial Intelligence

Engineering molecules to exhibit precise 3D intermolecular interactions with their environment forms the basis of chemical design. In ligand-based drug design, bioisosteric analogues of known bioactive hits are often identified by virtually screening chemical libraries with shape, electrostatic, and pharmacophore similarity scoring functions. We instead hypothesize that a generative model which learns the joint distribution over 3D molecular structures and their interaction profiles may facilitate 3D interaction-aware chemical design. We specifically design ShEPhERD, an SE(3)-equivariant diffusion model which jointly diffuses/denoises 3D molecular graphs and representations of their shapes, electrostatic potential surfaces, and (directional) pharmacophores to/from Gaussian noise. Inspired by traditional ligand discovery, we compose 3D similarity scoring functions to assess ShEPhERD's ability to conditionally generate novel molecules with desired interaction profiles. We demonstrate ShEPhERD's potential for impact via exemplary drug design tasks including natural product ligand hopping, protein-blind bioactive hit diversification, and bioisosteric fragment merging.
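ShEPhERD is built on denoising diffusion: structures are gradually noised toward a Gaussian and a model learns to reverse the process. The real model operates jointly on 3D molecular graphs, shapes, electrostatic surfaces, and pharmacophores; as a hedged illustration of just the forward-noising step, here it is for a single scalar (the closed-form Gaussian marginal used in standard diffusion models):

```python
# One step of the Gaussian forward-noising process underlying diffusion
# models, reduced to 1D for illustration. ShEPhERD itself diffuses 3D
# molecular graphs and their interaction profiles, not scalars.

import math
import random

def forward_noise(x0, alpha_bar, rng):
    """Sample x_t ~ N(sqrt(alpha_bar) * x0, 1 - alpha_bar)."""
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

rng = random.Random(0)
noisy = forward_noise(1.0, alpha_bar=0.9, rng=rng)   # mostly signal, some noise
clean = forward_noise(1.0, alpha_bar=1.0, rng=rng)   # no noise: returns x0 exactly
```

As `alpha_bar` decays toward zero over the schedule, the sample approaches pure Gaussian noise; generation runs the learned denoiser in the opposite direction.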


This robot is being controlled by a King oyster mushroom

Popular Science

Sinister, brain-controlling mushrooms are a staple in sci-fi shows and literature. While brainwashed humans doing the bidding of fungi remains fantasy, researchers have now learned how to control a robot's movement using electrical signals produced by the mycelium of the common King oyster mushroom. This part machine, part fungus robot could one day serve as a building block for more advanced "biohybrid" chimeras that can remotely analyze agricultural fields for potentially harmful changes in soil chemistry. Researchers from Cornell University and the University of Florence in Italy wanted to see if electrical signals pulsing through the mycelium of fungi could be translated into a controlling input for robots. The findings were published last month in the journal Science Robotics.


These robots move through the magic of mushrooms

Engadget

Researchers at Cornell University tapped into fungal mycelia to power a pair of proof-of-concept robots. Mycelia, the underground fungal network that can sprout mushrooms as its above-ground fruit, can sense light and chemical reactions and communicate through electrical signals. This makes it a novel component in hybrid robotics that could someday detect crop conditions otherwise invisible to humans. The Cornell researchers created two robots: a soft, spider-like one and a four-wheeled buggy. The researchers used mycelia's light-sensing abilities to control the machines using ultraviolet light.
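Both articles describe the same core idea: discrete electrical events in the mycelium are translated into actuation commands. The threshold and command mapping below are invented for illustration; the actual controller in the Science Robotics study processes real mycelium recordings with its own signal pipeline:

```python
# Toy sketch of turning a spiking bioelectric signal into motor commands.
# The 0.5 mV threshold and the "step"/"hold" mapping are assumptions made
# for this example, not values from the study.

def spikes_to_commands(voltages_mv, threshold_mv=0.5):
    """Emit 'step' on each upward threshold crossing, 'hold' otherwise."""
    commands = []
    prev = 0.0
    for v in voltages_mv:
        commands.append("step" if prev < threshold_mv <= v else "hold")
        prev = v
    return commands

signal = [0.1, 0.6, 0.4, 0.2, 0.8, 0.9]
print(spikes_to_commands(signal))
# ['hold', 'step', 'hold', 'hold', 'step', 'hold']
```

Detecting upward crossings rather than raw levels means each spike drives one movement, which is roughly how spike trains become gaits in biohybrid control.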


Six Eerie Predictions That Early Sci-Fi Authors Got Completely Wrong

The New Yorker

Since the genre's inception, science-fiction writers have imagined what the future might hold for Earth and beyond. While their stories are often fantastical, many of them anticipated technologies that actually exist today, such as television and artificial intelligence. However, countless more made predictions that were absolute whiffs. While many sci-fi authors envisioned the possibilities of nuclear power, Philip K. Dick's "The Land That Time Remembered" got specifically stuck on the idea of a society where humans washed their hands with "soap dispensers powered by the almighty atom," and where "torrents of soap spurted forth by means of the forces that birthed the universe." Still cherished today, "Twenty Thousand Leagues Under the Sea" brought us Jules Verne's dreams of electric-powered submarines, tasers, and other technologies that were unheard of in 1870.


Short Film Dataset (SFD): A Benchmark for Story-Level Video Understanding

Ghermi, Ridouane, Wang, Xi, Kalogeiton, Vicky, Laptev, Ivan

arXiv.org Artificial Intelligence

Recent advances in vision-language models have significantly propelled video understanding. Existing datasets and tasks, however, have notable limitations. Most datasets are confined to short videos with limited events and narrow narratives. For example, datasets with instructional and egocentric videos often document the activities of one person in a single scene. Although some movie datasets offer richer content, they are often limited to short-term tasks, lack publicly available videos and frequently encounter data leakage given the use of movie forums and other resources in LLM training. To address the above limitations, we propose the Short Film Dataset (SFD) with 1,078 publicly available amateur movies, a wide variety of genres and minimal data leakage issues. SFD offers long-term story-oriented video tasks in the form of multiple-choice and open-ended question answering. Our extensive experiments emphasize the need for long-term reasoning to solve SFD tasks. Notably, we find strong signals in movie transcripts leading to the on-par performance of people and LLMs. We also show significantly lower performance of current models compared to people when using vision data alone.
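SFD's tasks include multiple-choice question answering scored against an answer key. As a minimal sketch of that evaluation form (the questions and option letters below are invented, not drawn from the dataset):

```python
# Minimal multiple-choice QA scorer of the kind a benchmark like SFD
# would report. Predictions and gold answers here are made up.

def mcq_accuracy(predictions, answers):
    """Fraction of questions where the predicted option matches the key."""
    if not answers:
        return 0.0
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

preds = ["B", "A", "D", "C"]
gold  = ["B", "C", "D", "C"]
print(mcq_accuracy(preds, gold))  # 0.75
```

The paper's comparison of transcript-only, vision-only, and human performance all reduces to accuracies of this kind over the same question set.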


Automatically designing robot swarms in environments populated by other robots: an experiment in robot shepherding

Ramos, David Garzón, Birattari, Mauro

arXiv.org Artificial Intelligence

Automatic design is a promising approach to realizing robot swarms. Given a mission to be performed by the swarm, an automatic method produces the required control software for the individual robots. Automatic design has concentrated on missions that a swarm can execute independently, interacting only with a static environment and without the involvement of other active entities. In this paper, we investigate the design of robot swarms that perform their mission by interacting with other robots that populate their environment. We frame our research within robot shepherding: the problem of using a small group of robots, the shepherds, to coordinate a relatively larger group, the sheep. In our study, the group of shepherds is the swarm that is automatically designed, and the sheep are pre-programmed robots that populate its environment. We use automatic modular design and neuroevolution to produce the control software for the swarm of shepherds to coordinate the sheep. We show that automatic design can leverage mission-specific interaction strategies to enable an effective coordination between the two groups.
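The paper's automatically designed shepherds learn their own interaction strategies, but the classic hand-crafted baseline for shepherding is a "driving position" rule: steer toward a point behind the flock's centroid, opposite the goal. A minimal sketch of that rule, with the offset distance chosen arbitrarily:

```python
# Classic hand-designed "driving position" rule from shepherding models:
# the shepherd aims for a point behind the sheep centroid, on the far
# side from the goal. The 2.0-unit offset is an arbitrary choice here;
# the paper's swarms learn behaviors rather than using this fixed rule.

import math

def driving_position(sheep, goal, offset=2.0):
    """Point at `offset` behind the sheep centroid along the goal->flock line."""
    cx = sum(x for x, _ in sheep) / len(sheep)
    cy = sum(y for _, y in sheep) / len(sheep)
    dx, dy = cx - goal[0], cy - goal[1]
    norm = math.hypot(dx, dy) or 1.0   # avoid division by zero at the goal
    return (cx + offset * dx / norm, cy + offset * dy / norm)

flock = [(4.0, 0.0), (6.0, 0.0)]
print(driving_position(flock, goal=(0.0, 0.0)))  # (7.0, 0.0)
```

Pushing from this position nudges the flock toward the goal; the interest of the paper is that neuroevolution and modular design discover effective strategies like this automatically.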