Multi-interaction TTS toward professional recording reproduction
Kanagawa, Hiroki, Fujita, Kenichi, Watanabe, Aya, Ijima, Yusuke
arXiv.org Artificial Intelligence
Voice directors often iteratively refine voice actors' performances by providing feedback to achieve the desired outcome. While this iterative feedback-based refinement process is important in actual recordings, it has been overlooked in text-to-speech synthesis (TTS). As a result, fine-grained style refinement after the initial synthesis is not possible, even though the synthesized speech often deviates from the user's intended style. To address this issue, we propose a TTS method with multi-step interaction that allows users to intuitively and rapidly refine synthesized speech. Our approach models the interaction between the TTS model and its user to emulate the relationship between voice actors and voice directors. Experiments show that the proposed model with its corresponding dataset enables iterative style refinements in accordance with users' directions, thus demonstrating its multi-interaction capability.
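The interaction loop the abstract describes can be sketched in a few lines: each round, the user (acting as voice director) issues a style instruction, and the model re-synthesizes conditioned on the accumulated instruction history. This is a minimal illustrative sketch only; the dataclass, function names, and the toy stand-in for the TTS model are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class InteractionState:
    """Accumulates the text and the history of director-style feedback."""
    text: str
    directions: list = field(default_factory=list)

def synthesize(state: InteractionState) -> str:
    """Toy stand-in for a style-conditioned TTS model: returns a label
    describing the style implied by the instruction history so far."""
    style = " + ".join(state.directions) if state.directions else "neutral"
    return f"speech({state.text!r}, style={style})"

def refine(state: InteractionState, direction: str) -> str:
    """One interaction round: record the director's feedback, re-synthesize."""
    state.directions.append(direction)
    return synthesize(state)

state = InteractionState(text="Hello, world")
first = synthesize(state)                 # initial, unguided synthesis
second = refine(state, "brighter")        # first refinement step
third = refine(state, "slightly slower")  # later feedback stacks on earlier
```

The key design point mirrored here is that refinement is cumulative: each round conditions on the full instruction history rather than restarting from scratch, emulating how a voice actor carries earlier direction into later takes.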
Jul-3-2025
- Genre:
- Research Report > New Finding (0.46)
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science (0.68)
- Machine Learning > Neural Networks
- Deep Learning (1.00)
- Natural Language
- Chatbot (0.94)
- Large Language Model (1.00)
- Speech > Speech Synthesis (0.72)
- Vision (0.68)