A Context-Aware Loss Function for Action Spotting in Soccer Videos

Authors Marc Van Droogenbroeck, Rikke Gade, Anthony Cioppa, Silvio Giancola, Adrien Deliège, Bernard Ghanem, Thomas B. Moeslund
Journal/Conference Name CVPR 2020
Paper Abstract In video understanding, action spotting consists in temporally localizing human-induced events annotated with single timestamps. In this paper, we propose a novel loss function that specifically considers the temporal context naturally present around each action, rather than focusing on the single annotated frame to spot. We benchmark our loss on a large dataset of soccer videos, SoccerNet, and achieve an improvement of 12.8% over the baseline. We show the generalization capability of our loss for generic activity proposals and detection on ActivityNet, by spotting the beginning and the end of each activity. Furthermore, we provide an extended ablation study and display challenging cases for action spotting in soccer videos. Finally, we qualitatively illustrate how our loss induces a precise temporal understanding of actions and show how such semantic knowledge can be used for automatic highlights generation.
Date of publication 2019
Code Programming Language Jupyter Notebook
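The abstract describes replacing a single annotated frame with a soft temporal context around each action. As a rough illustrative sketch only (this is not the authors' time-shift encoding; the function names, the linear decay, and the `radius` parameter are assumptions), a context-aware target and loss might look like:

```python
import numpy as np

def context_aware_targets(num_frames, action_frame, radius=8):
    """Soft per-frame targets that decay linearly with temporal
    distance from the single annotated frame (hypothetical scheme,
    not the paper's exact context slicing)."""
    t = np.arange(num_frames)
    dist = np.abs(t - action_frame)
    return np.clip(1.0 - dist / radius, 0.0, 1.0)

def context_aware_bce(pred, targets, eps=1e-7):
    """Binary cross-entropy of per-frame predictions against the
    soft context targets, so frames near the annotation are rewarded
    rather than only the exact annotated frame."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(targets * np.log(pred)
                          + (1.0 - targets) * np.log(1.0 - pred)))
```

Under this toy scheme, the frame at the annotation gets target 1.0 and frames beyond `radius` get 0.0, which captures the abstract's idea of exploiting the temporal context naturally present around each action instead of a single timestamp.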
