Exploring Fusion Strategies for Multimodal Vision-Language Systems
arXiv.org Artificial Intelligence
Modern machine learning models often combine multiple input data streams to more accurately capture the information that informs their decisions. In multimodal machine learning, choosing a strategy for fusing the data requires careful consideration of the application's accuracy and latency requirements, as fusing at earlier or later stages of the model architecture changes both. To demonstrate this trade-off, we investigate different fusion strategies using a hybrid BERT and vision network framework that integrates image and text data. We explore two different vision networks: MobileNetV2 and ViT. For each vision network, we propose three models that fuse data at late, intermediate, and early stages of the architecture. We evaluate the proposed models on the CMU-MOSI dataset and benchmark their latency on an NVIDIA Jetson Orin AGX. Our experimental results demonstrate that while late fusion yields the highest accuracy, early fusion offers the lowest inference latency. We describe the three proposed model architectures and discuss the accuracy and latency trade-offs, concluding that fusing data earlier in the model architecture results in faster inference at the cost of accuracy.
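The distinction between late and early fusion described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation: the feature dimensions (768 for BERT, 1280 for MobileNetV2), the per-modality linear heads, and the logit-averaging rule are all illustrative assumptions; only the general pattern of "fuse predictions late" versus "concatenate features early" is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features (dimensions chosen for illustration):
# a 768-d BERT text embedding and a 1280-d MobileNetV2 image embedding.
text_feat = rng.standard_normal(768)
image_feat = rng.standard_normal(1280)

def late_fusion(text_feat, image_feat, w_text, w_image):
    """Each modality gets its own classifier head; the logits are averaged."""
    text_logits = w_text @ text_feat      # shape: (num_classes,)
    image_logits = w_image @ image_feat   # shape: (num_classes,)
    return (text_logits + image_logits) / 2.0

def early_fusion(text_feat, image_feat, w_joint):
    """Features are concatenated first; a single shared head sees both."""
    joint = np.concatenate([text_feat, image_feat])  # shape: (768 + 1280,)
    return w_joint @ joint                           # shape: (num_classes,)

num_classes = 3  # e.g. negative / neutral / positive sentiment
w_text = rng.standard_normal((num_classes, 768))
w_image = rng.standard_normal((num_classes, 1280))
w_joint = rng.standard_normal((num_classes, 768 + 1280))

print(late_fusion(text_feat, image_feat, w_text, w_image).shape)  # (3,)
print(early_fusion(text_feat, image_feat, w_joint).shape)         # (3,)
```

In a full model, early fusion lets the joint representation pass through shared layers once, which is consistent with the lower latency reported, while late fusion keeps deeper modality-specific processing, which is consistent with its higher accuracy.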
Dec-1-2025