Appendix 1 Methods details
Two-body system initialization. The trajectories are initialized to be near-circular. Three-body system initialization. For the chaotic three-body systems, we also apply initial-condition regularization so that the initial trajectories of the system are likewise near-circular. We show a few examples of the generated two-body and three-body systems in Figure A1. As in Section 4.1, we show the latent space obtained from the recordings, which are collected from surgically implanted electrode arrays and are thresholded and spike sorted when collected.
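The appendix does not spell out the initialization formula, so as a sketch of what "near-circular" initialization typically means: place the two bodies about their barycenter, give them the circular-orbit velocity, and perturb the speed by a small fraction `eps`. The masses, units, and perturbation scheme below are assumptions for illustration, not the paper's actual values.

```python
import numpy as np

def init_two_body_near_circular(G=1.0, m1=1.0, m2=1.0, r=1.0, eps=0.05, rng=None):
    """Place two bodies at separation r about their common center of mass
    and give them the circular-orbit velocity, perturbed by a small
    fractional amount eps so the orbit is near- (not exactly) circular."""
    rng = np.random.default_rng() if rng is None else rng
    M = m1 + m2
    # Positions about the barycenter, along the x-axis.
    x1 = np.array([-m2 / M * r, 0.0])
    x2 = np.array([ m1 / M * r, 0.0])
    # Circular-orbit speed of the relative coordinate.
    v_rel = np.sqrt(G * M / r)
    # Velocities perpendicular to the separation, scaled by the perturbation.
    scale = 1.0 + eps * rng.uniform(-1.0, 1.0)
    v1 = np.array([0.0, -m2 / M * v_rel * scale])
    v2 = np.array([0.0,  m1 / M * v_rel * scale])
    return (x1, v1), (x2, v2)
```

With `eps=0` this yields an exactly circular orbit; small `eps` gives the near-circular trajectories described above, and the construction keeps the total momentum zero by design.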
DART-Vetter: A Deep LeARning Tool for automatic triage of exoplanet candidates
Fiscale, Stefano; Inno, Laura; Rotundi, Alessandra; Ciaramella, Angelo; Ferone, Alessio; Magliano, Christian; Cacciapuoti, Luca; Kostov, Veselin; Quintana, Elisa; Covone, Giovanni; Tomajoli, Maria Teresa Muscari; Saggese, Vito; Tonietti, Luca; Vanzanella, Antonio; Della Corte, Vincenzo
In the identification of new planetary candidates in transit surveys, Deep Learning models have proved essential for efficiently analysing a continuously growing volume of photometric observations. To further improve the robustness of these models, it is necessary to exploit the complementarity of data collected from different transit surveys such as NASA's Kepler, the Transiting Exoplanet Survey Satellite (TESS) and, in the near future, ESA's PLAnetary Transits and Oscillation of stars (PLATO) mission. In this work, we present a Deep Learning model, named DART-Vetter, able to distinguish planetary candidates (PC) from false positive signals (NPC) detected by any potential transit survey. DART-Vetter is a Convolutional Neural Network that processes only the light curves folded on the period of the relevant signal, featuring a simpler and more compact architecture than other triaging and/or vetting models available in the literature. We trained and tested DART-Vetter on several datasets of publicly available and homogeneously labelled TESS and Kepler light curves in order to prove the effectiveness of our model. Despite its simplicity, DART-Vetter achieves highly competitive triaging performance, with a recall rate of 91% on an ensemble of TESS and Kepler data, when compared to ExoMiner and Astronet-Triage. Its compact, open-source and easy-to-replicate architecture makes DART-Vetter a particularly useful tool for automating triaging procedures or assisting human vetters, showing reasonable generalization on TCEs with Multiple Event Statistic (MES) > 20 and orbital period < 50 days.
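The abstract does not give DART-Vetter's actual layer configuration; the following is a minimal NumPy sketch of the general idea it describes, a compact 1-D CNN that scores a phase-folded light curve as planet candidate vs. false positive. The kernel sizes, channel counts, and input length are illustrative assumptions, not the model's real architecture.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution + ReLU: x (L, C_in), w (K, C_in, C_out), b (C_out)."""
    K, _, C_out = w.shape
    L_out = x.shape[0] - K + 1
    out = np.empty((L_out, C_out))
    for i in range(L_out):
        out[i] = np.tensordot(x[i:i + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def max_pool(x, k=2):
    """Non-overlapping max pooling along the length axis."""
    L = (x.shape[0] // k) * k
    return x[:L].reshape(-1, k, x.shape[1]).max(axis=1)

def triage_forward(flux, params):
    """Score a phase-folded light curve (1-D flux array) in [0, 1]."""
    h = flux[:, None]                       # (L, 1): a single input channel
    for w, b in params["conv"]:
        h = max_pool(conv1d(h, w, b))       # conv -> ReLU -> pool stack
    z = h.ravel() @ params["dense_w"] + params["dense_b"]
    return 1.0 / (1.0 + np.exp(-z))         # sigmoid -> P(planet candidate)
```

A real training setup would of course learn the filters from labelled TCEs; this forward pass only illustrates why folding the light curve on the signal's period lets such a small network suffice.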
- Europe > Italy > Campania > Naples (0.04)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- North America > United States > Maryland > Prince George's County > Greenbelt (0.04)
- (5 more...)
- Government > Space Agency (0.34)
- Government > Regional Government > North America Government > United States Government (0.34)
Project Jenkins: Turning Monkey Neural Data into Robotic Arm Movement, and Back
Zahorodnii, Andrii; Yanovsky, Dima
Synthetic neural data generation and neuroprosthetic devices are active areas of research, sparked by advances in neuroscience and robotics [22, 4, 2, 15]. These fields have significant implications for brain-computer interfaces, rehabilitation, and the simulation of brain dynamics for downstream tasks or for gaining new understanding of the underlying neural mechanisms. In this project, which we call "Project Jenkins," we explore such decoding and encoding of neural data from a macaque monkey named Jenkins. We used a publicly available dataset [5] containing neural firing patterns from Jenkins' motor and premotor cortical areas during a center-out reach task. Generating synthetic neural activity enables researchers to test and refine decoding models without requiring continuous access to live neural recordings [12, 16], while neuroprosthetic advancements [18, 20, 21, 9, 7, 3, 8, 17] rely on robust encoding techniques to translate brain signals into precise motor commands. Our aim was two-fold (Figures 1, 2): decoding, translating neural spiking data into predicted velocities for a robotic arm; and encoding, generating synthetic neural activity corresponding to an intended robotic movement. With this paper, we publish the developed open-source tools for both synthetic neural data generation and neural decoding, enabling researchers to replicate our methods and build upon them. Our full codebase and additional resources, including demonstration videos, can be found on the project's website: https://www.808robots.com/
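The abstract does not specify the project's decoding architecture; a common baseline for the decoding half of such a pipeline, translating binned spike counts into 2-D velocities, is a ridge-regression (linear) decoder. The sketch below uses hypothetical array shapes and is not the project's actual implementation.

```python
import numpy as np

def fit_velocity_decoder(spikes, velocity, lam=1.0):
    """Ridge-regression decoder from binned spike counts (T, N_units)
    to 2-D arm/cursor velocity (T, 2). Returns weights incl. a bias row."""
    X = np.hstack([spikes, np.ones((spikes.shape[0], 1))])  # append bias column
    # Closed-form ridge solution: W = (X'X + lam*I)^-1 X'Y
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ velocity)

def decode_velocity(spikes, W):
    """Predict velocities for new spike-count bins with fitted weights W."""
    X = np.hstack([spikes, np.ones((spikes.shape[0], 1))])
    return X @ W
```

Integrating the predicted velocities over time then yields the robotic arm trajectory; the encoding direction (movement to synthetic spikes) can be seen as learning the inverse of this map.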
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Europe > United Kingdom > England > Somerset > Bath (0.04)
'Hold on to your seats': how much will AI affect the art of film-making?
Last year, Rachel Antell, an archival producer for documentary films, started noticing AI-generated images mixed in with authentic photos. There are always holes or limitations in an archive; in one case, film-makers got around a shortage of images for a barely photographed 19th-century woman by using AI to generate what looked like old photos. Which brought up the question: should they? And if they did, what sort of transparency is required? The capability and availability of generative AI – the type that can produce text, images and video – have changed so rapidly, and the conversations around it have been so fraught, that film-makers' ability to use it far outpaces any consensus on how.
- Europe > France (0.06)
- North America > United States > New York (0.05)
- Europe > Russia > North Caucasian Federal District > Chechen Republic (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Machine learning enhances monitoring of threatened marbled murrelet
Machine learning analysis of data gathered by acoustic recording devices is a promising new tool for monitoring the marbled murrelet and other secretive, hard-to-study species, research by Oregon State University and the U.S. Forest Service has shown. The threatened marbled murrelet is an iconic Pacific Northwest seabird that's closely related to puffins and murres, but unlike those birds, murrelets raise their young as far as 60 miles inland in mature and old-growth forests. "There are very few species like it," said co-author Matt Betts of the OSU College of Forestry. "And there's no other bird that feeds in the ocean and travels such long distances to inland nest sites. This behavior is super unusual and it makes studying this bird really challenging."
- North America > United States > Oregon (0.31)
- North America > United States > California > Santa Cruz County > Santa Cruz (0.06)
After 'The Last of Us,' Everything Will Be Transmedia
With a selection of creatives invited from across disciplines, the matter at hand would dominate popular culture for the next two decades: How could franchises expand beyond just one or two mediums? How could they achieve what EA's head of intellectual property development, Danny Bilson, called a "deepening of the universe"? That challenge and the book it inspired, Jenkins' Convergence Culture from 2006, turned out to be prophetic. At a time when movie attendance was on the rise, video games were hours long, and the internet was connecting everyone, Jenkins argued that media industries were missing a trick, and competing when they should've been collaborating. In response, he pitched a move into "transmedia storytelling", a concept akin to the media mix in Japan at the time, where Pokémon dominated everything from anime to key rings. This would allow each medium to do what it does best, he wrote, "so that a story might be introduced in a film, expanded through television, novels, and comics, and its world might be explored and experienced through game play."
- Asia > Japan (0.26)
- Asia > Middle East > Republic of Türkiye > Batman Province > Batman (0.06)
- Media (1.00)
- Leisure & Entertainment > Games > Computer Games (1.00)
US launches artificial intelligence military use initiative - ABC News
The United States launched an initiative Thursday promoting international cooperation on the responsible use of artificial intelligence and autonomous weapons by militaries, seeking to impose order on an emerging technology that has the potential to change the way war is waged. "As a rapidly changing technology, we have an obligation to create strong norms of responsible behavior concerning military uses of AI and in a way that keeps in mind that applications of AI by militaries will undoubtedly change in the coming years," Bonnie Jenkins, the State Department's under secretary for arms control and international security, said. She said the U.S. political declaration, which contains non-legally binding guidelines outlining best practices for responsible military use of AI, "can be a focal point for international cooperation." Jenkins launched the declaration at the end of a two-day conference in The Hague that took on additional urgency as advances in drone technology amid Russia's war in Ukraine have accelerated a trend that could soon bring the world's first fully autonomous fighting robots to the battlefield. The U.S. declaration has 12 points, including that military uses of AI are consistent with international law, and that states "maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment."
- North America > United States (0.57)
- Europe > Ukraine (0.35)
- Europe > Netherlands > South Holland > The Hague (0.32)
- (3 more...)
Continual learning on deployment pipelines for Machine Learning Systems
With the ongoing development of digitization, a growing number of large Original Equipment Manufacturers (OEMs) are adopting computer vision or natural language processing in a wide range of applications such as anomaly detection and quality inspection in plants. The deployment of such systems is therefore becoming an extremely important topic. Our work starts with the least-automated deployment technologies for machine learning systems, proceeds through several iterations of updates, and ends with a comparison of automated deployment techniques. The objective is, on the one hand, to compare the advantages and disadvantages of the various technologies in theory and practice, so that later adopters can avoid common mistakes when implementing actual use cases and thereby choose a better strategy for their own enterprises. On the other hand, it is to raise awareness of the evaluation framework for the deployment of machine learning systems, with more comprehensive and useful evaluation metrics (e.g. Table 2), rather than a focus on a single factor (e.g. company cost). This is especially important for decision-makers in the industry.
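As an illustration of evaluating deployment strategies on several metrics rather than cost alone, the toy example below ranks two hypothetical strategies by a weighted score. The metric names, weights, and scores are invented for illustration and are not the paper's actual evaluation framework or its Table 2.

```python
# Hypothetical metrics and weights (sum to 1); higher per-metric score is better.
METRICS = {"automation": 0.3, "cost": 0.2, "maintainability": 0.25, "rollout_speed": 0.25}

def score_strategy(scores, weights=METRICS):
    """Weighted sum of per-metric scores in [0, 1]."""
    assert set(scores) == set(weights), "each strategy must rate every metric"
    return sum(weights[m] * scores[m] for m in weights)

strategies = {
    "manual_deploy":  {"automation": 0.1, "cost": 0.9, "maintainability": 0.3, "rollout_speed": 0.2},
    "ci_cd_pipeline": {"automation": 0.9, "cost": 0.5, "maintainability": 0.8, "rollout_speed": 0.9},
}
ranked = sorted(strategies, key=lambda s: score_strategy(strategies[s]), reverse=True)
```

The point of such a scheme is exactly the abstract's: a cheap manual deployment can win on the cost axis alone yet lose once automation, maintainability, and rollout speed enter the evaluation.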
- Europe > United Kingdom > England > West Midlands > Birmingham (0.04)
- Europe > Germany > North Rhine-Westphalia > Upper Bavaria > Munich (0.04)
- Europe > Germany > North Rhine-Westphalia > Düsseldorf Region > Düsseldorf (0.04)
- (2 more...)
Manipulation-Oriented Object Perception in Clutter through Affordance Coordinate Frames
Chen, Xiaotong; Zheng, Kaizhi; Zeng, Zhen; Kisailus, Cameron; Basu, Shreshtha; Cooney, James; Pavlasek, Jana; Jenkins, Odest Chadwicke
In order to enable robust operation in unstructured environments, robots should be able to generalize manipulation actions to novel object instances. For example, to pour and serve a drink, a robot should be able to recognize novel containers which afford the task. Most importantly, robots should be able to manipulate these novel containers to fulfill the task. To achieve this, we aim to provide robust and generalized perception of object affordances and their associated manipulation poses for reliable manipulation. In this work, we combine the notions of affordance and category-level pose, and introduce the Affordance Coordinate Frame (ACF). With ACF, we represent each object class in terms of individual affordance parts and the compatibility between them, where each part is associated with a part category-level pose for robot manipulation. In our experiments, we demonstrate that ACF outperforms state-of-the-art methods for object detection, as well as category-level pose estimation for object parts. We further demonstrate the applicability of ACF to robot manipulation tasks through experiments in a simulated environment.
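The paper's actual data structures are not given in the abstract; as a sketch of the ACF idea, an object represented as a set of affordance parts, each carrying its own part-level pose that a manipulation routine can consume, one might write the following. All names and fields are illustrative, not the authors' API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AffordancePart:
    name: str         # e.g. "grasp_handle", "pour_opening"
    pose: np.ndarray  # 4x4 homogeneous transform of the part in the camera frame

@dataclass
class ACFObject:
    category: str     # e.g. "mug"; novel instances share the part structure
    parts: list

    def part_pose(self, name):
        for p in self.parts:
            if p.name == name:
                return p.pose
        raise KeyError(name)

def grasp_target(obj, approach_offset=0.1):
    """Pre-grasp pose: back off along the grasp part's approach (z) axis."""
    T = obj.part_pose("grasp_handle").copy()
    T[:3, 3] -= approach_offset * T[:3, 2]
    return T
```

Because the manipulation routine only queries part names and part poses, it generalizes to any novel instance whose detector outputs the same affordance parts, which is the compatibility the abstract emphasizes.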