spawn


Automated Coral Spawn Monitoring for Reef Restoration: The Coral Spawn and Larvae Imaging Camera System (CSLICS)

Tsai, Dorian, Brunner, Christopher A., Lamont, Riki, Nordborg, F. Mikaela, Severati, Andrea, Terry, Java, Jackel, Karen, Dunbabin, Matthew, Fischer, Tobias, Raine, Scarlett

arXiv.org Artificial Intelligence

Coral aquaculture for reef restoration requires accurate and continuous spawn counting for resource distribution and larval health monitoring, but current methods are labor-intensive and represent a critical bottleneck in the coral production pipeline. We propose the Coral Spawn and Larvae Imaging Camera System (CSLICS), which uses low-cost modular cameras and object detectors trained using human-in-the-loop labeling approaches for automated spawn counting in larval rearing tanks. This paper details the system engineering, dataset collection, and computer vision techniques to detect, classify, and count coral spawn. Experimental results from mass spawning events demonstrate an F1 score of 82.4% for surface spawn detection at different embryogenesis stages, a 65.3% F1 score for sub-surface spawn detection, and a saving of 5,720 hours of labor per spawning event compared to manual sampling at the same frequency. Comparison of manual counts with CSLICS monitoring during a mass coral spawning event on the Great Barrier Reef demonstrates that CSLICS accurately measures fertilization success and sub-surface spawn counts. These findings enhance the coral aquaculture process and enable upscaling of coral reef restoration efforts to address the climate change threats facing ecosystems like the Great Barrier Reef.
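The detection quality above is summarized with the F1 score, the harmonic mean of precision and recall. A minimal sketch of how such a score is computed from detection counts (the counts below are made up for illustration, not taken from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score: harmonic mean of precision and recall over detections."""
    precision = tp / (tp + fp)  # fraction of detections that are real spawn
    recall = tp / (tp + fn)     # fraction of real spawn that were detected
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one image batch: 80 true positives,
# 20 false positives, 14 missed detections.
print(round(f1_score(80, 20, 14), 3))  # → 0.825
```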


SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser

Prasad, Grusha, Linzen, Tal

arXiv.org Artificial Intelligence

Structural priming is a widely used psycholinguistic paradigm for studying human sentence representations. In this work we propose a framework for using empirical priming patterns to build a theory characterizing the structural representations humans construct when processing sentences. This framework uses a new cognitively motivated parser, SPAWN, to generate quantitative priming predictions from theoretical syntax and evaluate these predictions against empirical human behavior. As a case study, we apply this framework to reduced relative clause representations in English. We use SPAWN to generate priming predictions from two theoretical accounts which make different assumptions about the structure of relative clauses. We find that the predictions from only one of these theories (Participial-Phase) align with empirical priming patterns, highlighting which assumptions about relative clauses better capture human sentence representations.


Relativistic Digital Twin: Bringing the IoT to the Future

Sciullo, Luca, De Marchi, Alberto, Trotta, Angelo, Montori, Federico, Bononi, Luciano, Di Felice, Marco

arXiv.org Artificial Intelligence

Complex IoT ecosystems often require Digital Twins (DTs) of their physical assets in order to perform predictive analytics and simulate what-if scenarios. DTs are able to replicate IoT devices and adapt over time to their behavioral changes. However, DTs in IoT are typically tailored to a specific use case, without the possibility to seamlessly adapt to different scenarios. Further, the fragmentation of IoT poses additional challenges for deploying DTs in heterogeneous scenarios characterized by multiple data formats and IoT network protocols. In this paper, we propose the Relativistic Digital Twin (RDT) framework, through which we automatically generate general-purpose DTs of IoT entities and tune their behavioral models over time by constantly observing their real counterparts. The framework relies on object representation via the Web of Things (WoT) to offer a standardized interface to each of the IoT devices as well as to their DTs. To this purpose, we extended the W3C WoT standard to encompass the concept of a behavioral model and define it in the Thing Description (TD) through a new vocabulary. Finally, we evaluated the RDT framework over two disjoint use cases to assess its correctness and learning performance: the DT of a simulated smart home with the capability of forecasting the indoor temperature, and the DT of a real-world drone with the capability of forecasting its trajectory in an outdoor scenario. Experiments show that the generated DT can estimate the behavior of its real counterpart after an observation stage, regardless of the considered scenario.
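A WoT Thing Description is a JSON document. A minimal sketch of what a TD for the smart-home use case might look like once extended with a behavioral-model entry — note that the `bm:` term names and IRI here are hypothetical stand-ins, since the paper's actual vocabulary is not given in this summary:

```python
import json

# Sketch of a W3C WoT Thing Description for an indoor thermometer,
# extended with a behavioral-model block. The "bm:" vocabulary (term
# names and IRI) is a hypothetical stand-in, not the paper's actual one.
thing_description = {
    "@context": [
        "https://www.w3.org/2019/wot/td/v1",
        {"bm": "https://example.org/behavioral-model#"},  # hypothetical IRI
    ],
    "title": "IndoorThermometer",
    "properties": {
        "temperature": {"type": "number", "unit": "celsius", "forms": []},
    },
    # New vocabulary: which property the learned model forecasts, and how far ahead.
    "bm:behavioralModel": {
        "bm:target": "temperature",
        "bm:horizonMinutes": 30,
    },
}

print(json.dumps(thing_description, indent=2))
```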


GROOT: Learning to Follow Instructions by Watching Gameplay Videos

Cai, Shaofei, Zhang, Bowei, Wang, Zihao, Ma, Xiaojian, Liu, Anji, Liang, Yitao

arXiv.org Artificial Intelligence

We study the problem of building a controller that can follow open-ended instructions in open-world environments. We propose to follow reference videos as instructions, which offer expressive goal specifications while eliminating the need for expensive text-gameplay annotations. A new learning framework is derived to allow learning such instruction-following controllers from gameplay videos while producing a video instruction encoder that induces a structured goal space. We implement our agent GROOT in a simple yet effective encoder-decoder architecture based on causal transformers. We evaluate GROOT against open-world counterparts and human players on a proposed Minecraft SkillForge benchmark. The Elo ratings clearly show that GROOT is closing the human-machine gap as well as exhibiting a 70% winning rate over the best generalist agent baseline. Qualitative analysis of the induced goal space further demonstrates some interesting emergent properties, including the goal composition and complex gameplay behavior synthesis. The project page is available at https://craftjarvis-groot.github.io.
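Elo ratings map pairwise win rates onto a single scale: under the standard Elo model, a 70% win rate corresponds to a rating gap of roughly 150 points. A small sketch of that relationship (the ratings are illustrative, not taken from the paper):

```python
def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score of A against B under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# A ~147-point rating gap yields roughly a 70% expected win rate.
print(round(elo_expected(1547, 1400), 2))  # → 0.7
```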


Researchers reconstruct 3D environments from eye reflections

Engadget

Researchers at the University of Maryland have turned eye reflections into (somewhat discernible) 3D scenes. The work builds on Neural Radiance Fields (NeRF), an AI technology that can reconstruct environments from 2D photos. Although the eye-reflection approach has a long way to go before it spawns any practical applications, the study (first reported by Tech Xplore) provides a fascinating glimpse into a technology that could eventually reveal an environment from a series of simple portrait photos. The team used subtle reflections of light captured in human eyes (using consecutive images shot from a single sensor) to try to discern the person's immediate environment. They began with several high-resolution images from a fixed camera position, capturing a moving individual looking toward the camera.


AI 'kill switch' will make humanity less safe, could spawn 'hostile' superintelligence: AI Foundation

FOX News

CEO Rob Meadows and co-founder Lars Buttler discuss the benefits and concerns surrounding artificial intelligence. Executives behind the American artificial intelligence (AI) company AI Foundation are cautioning against implementing kill switches in machine systems, arguing that such a move could increase the chances of a superintelligence that is hostile toward human civilization. According to a new Yale CEO Summit survey, 42% of polled CEOs agreed that AI could potentially end humanity within five to ten years. In citing the study, AI Foundation CMO and Chair Lars Buttler said the debate around AI needs to be elevated and suggested that people react emotionally to the new technology because of a lack of understanding about what is happening behind the scenes. However, both Buttler and CEO Rob Meadows warned of several concerns surrounding the advancement of AI and the possible creation of an artificial general intelligence (AGI) capable of reasoning and decision-making equal to or beyond that of a human. "With AI, you will always have this accidental danger, these accidental problems, you know?


The Spawn of ChatGPT Will Try to Sell You Things

WIRED

ChatGPT, the recently viral and surprisingly articulate chatbot, has dazzled the internet with its ability to dutifully answer all sorts of knotty questions--albeit not always accurately. Some people are now trying to adapt the bot's eloquence to play different roles. They hope to harness the AI like that behind ChatGPT to create programs that can persuade, cajole, and badger with super-human tenacity--in some cases to empower consumers but in others to win sales. Joshua Browder, the CEO of DoNotPay, a company that automates administrative chores including disputing parking fines and requesting compensation from airlines, this week released video of a chatbot negotiating down the price of internet service on a customer's behalf. The negotiator-bot was built on the AI technology that powers ChatGPT. It complains about poor internet service and parries the points made by a Comcast agent in an online chat, successfully negotiating a discount worth $120 annually.


Steps towards prompt-based creation of virtual worlds

Roberts, Jasmine, Banburski-Fahey, Andrzej, Lanier, Jaron

arXiv.org Artificial Intelligence

Large language models trained for code generation can be applied to speaking virtual worlds into existence (creating virtual worlds). In this work we show that prompt-based methods can both accelerate in-VR level editing, as well as can become part of gameplay rather than just part of game development. As an example, we present Codex VR Pong, which shows non-deterministic game mechanics using generative processes to not only create static content but also non-trivial interactions between 3D objects. This demonstration naturally leads to an integral discussion on how one would evaluate and benchmark experiences created by generative models, as there are no qualitative or quantitative metrics that apply in these scenarios. We conclude by discussing impending challenges of AI-assisted co-creation in VR.

Multimodal text-to-image models, like DALL-E 2 [34], Midjourney [11] or Stable Diffusion [35], are raising concerns about displacing concept artists and have already won at least one major art competition [36]. Large Language Models (LLMs), like GPT-3 [6], are not only generating very convincing text completions, but have recently become capable of generating code with models like OpenAI Codex [8] or AlphaCode [25]. We propose in this paper that these capabilities can be combined to allow "speaking the world into existence", or taking natural language descriptions and turning them into interactive visual scenes within a game engine. In particular, this has the potential for allowing authoring of Virtual Reality (VR) experiences from within the headset, as well as allowing completely novel modes of gameplay.


10 tracks that harness the power of artificial intelligence

#artificialintelligence

Despite the numerous AI platforms which serve up routes to auto-generate functional music, many artists who have overtly worked with AI have approached the concept via more individual means. Take Holly Herndon, the Berlin-based composer and musicologist who recently created her own intelligent musical accomplice. Dubbed 'Spawn', this vocal-sample generator was taught by Herndon and partner Mat Dryhurst to reproduce a bank of vocal-types (including her own) via months of training its complex neural network. Spawn was able to organically add vocals to tracks presented to it. Though, as Herndon told Art in America, the process is still finding its feet: "AI is not that smart, it's very low fidelity, it's not real time, it's very slow and unwieldy. Spawn can take more than 24 hours to process someone's vocal input. On the other hand, it has some unique capabilities that are pretty exciting-slash-scary. The AI can extract the logic of something outside its operator's own logic and re-create it. This is entirely new for computer music."


Music of the Future: Listen to These Songs Made by Artificial Intelligence

#artificialintelligence

Artificial Intelligence is taking over the music industry. These AI songs blend human artistry with AI technologies like machine learning.