VOXReality

Funded by the European Union

VOXReality’s goal is to conduct research and develop new AI models that drive future interactive XR experiences, and to deliver these models to the wider European market. The new models will address human-to-human interaction in unidirectional (theatre) and bidirectional (conference) settings, as well as human-to-machine interaction, by building the next generation of personal assistants.

VOXReality will develop large-scale self-supervised models that can be fine-tuned to specific downstream tasks with minimal re-training. At the same time, we will rely on modern training approaches to design models containing subnetworks that share representation power but are tailored to specific target architectures. By leveraging the once-for-all concept from both the training (large-scale self-supervision) and deployment (jointly learned subnetworks) perspectives, we will provide a catalogue of highly generic models with high representation capacity that can be efficiently specialized for downstream tasks.
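
To make the general idea concrete, here is a minimal sketch (plain NumPy, not VOXReality code) of one generic frozen backbone specialized for several downstream tasks through small trainable heads; all names and dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a stand-in for a large pre-trained self-supervised model.
# Its weights are never updated when specializing to a downstream task.
D_IN, D_FEAT = 16, 32
W_backbone = rng.standard_normal((D_IN, D_FEAT))

def backbone(x):
    # Shared representation used by every downstream task.
    return np.tanh(x @ W_backbone)

class TaskHead:
    """Lightweight task-specific head; only these weights are trained."""
    def __init__(self, d_feat, d_out):
        self.W = np.zeros((d_feat, d_out))

    def __call__(self, x):
        return backbone(x) @ self.W

    def train_step(self, x, y, lr=0.1):
        feats = backbone(x)                     # frozen features
        grad = feats.T @ (feats @ self.W - y) / len(x)  # MSE gradient
        self.W -= lr * grad

# A "catalogue" of one generic model specialized per task:
asr_head = TaskHead(D_FEAT, d_out=4)   # hypothetical ASR-style output
nmt_head = TaskHead(D_FEAT, d_out=8)   # hypothetical NMT-style output
x = rng.standard_normal((5, D_IN))
assert asr_head(x).shape == (5, 4) and nmt_head(x).shape == (5, 8)
```

Note that each head trains far fewer parameters than the backbone contains, which is what makes specialization cheap relative to full re-training.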

UM (DACS) is the scientific coordinator of the project, and our team is responsible for developing state-of-the-art automatic speech recognition (ASR) and neural machine translation (NMT) models, with emphasis on non-native speech and the efficient use of context.
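
One common way to exploit context in NMT is to prepend recent source sentences to the current one, separated by a marker token. The helper below is a hypothetical sketch of that input construction (the separator, budget, and function name are assumptions, not the project's actual method):

```python
def build_context_input(history, current, max_ctx_tokens=16, sep="<sep>"):
    """Prepend the most recent source sentences as context, oldest first,
    keeping the total context length within a token budget."""
    ctx, used = [], 0
    for sent in reversed(history):   # walk back from the newest sentence
        n = len(sent.split())
        if used + n > max_ctx_tokens:
            break
        ctx.insert(0, sent)          # restore chronological order
        used += n
    return f" {sep} ".join(ctx + [current])

history = ["the meeting starts at noon", "please be on time"]
build_context_input(history, "it will be held online")
# "the meeting starts at noon <sep> please be on time <sep> it will be held online"
```

The token budget keeps the concatenated input short enough for the model while retaining the most recent, and usually most relevant, context.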

Team members are:

Recent publications from the team include (Maka et al., 2024) and (Issam et al., 2024).

References

2024

  1. Sequence Shortening for Context-Aware Machine Translation
    Paweł Maka, Yusuf Semerci, Jan Scholtes, and Gerasimos Spanakis
    In Findings of the Association for Computational Linguistics: EACL 2024, Mar 2024
  2. Fixed and Adaptive Simultaneous Machine Translation Strategies Using Adapters
    Abderrahmane Issam, Yusuf Can Semerci, Jan Scholtes, and Gerasimos Spanakis
    In Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024), Aug 2024