The most common approach to automatic summarization and highlight detection in sports video is to train a classifier to detect semantic highlights from occurrences of low-level features such as action replays, excited commentary or changes to a scoreboard. We propose an alternative approach based on the detection of perception concepts (PCs) and the construction of Petri-Nets, which can be used for both semantic description and event detection within sports videos. Low-level algorithms to detect PCs from visual, aural and motion characteristics are proposed, and a series of Petri-Nets composed of PCs is formally defined to describe video content. We call this a perception concept network–Petri-Net (PCN–PN) model. PCN–PNs enable personalized high-level semantic descriptions of video highlights and support queries on high-level semantics. A particular strength of this framework is that semantic detectors built from PCN–PNs can easily be used to search within sports videos and locate interesting events. Experimental results on recorded sports video across three types of sports games (soccer, basketball and rugby), each from multiple broadcasters, illustrate the potential of this framework.
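The event-detection idea sketched in the abstract can be illustrated with a minimal, generic Petri-Net in which places hold tokens for detected perception concepts and a transition fires when all of its input concepts have been observed. This is only an illustrative sketch, not the paper's PCN–PN formalism: the concept names (`crowd_cheer`, `scoreboard_change`, `replay`) and the `detect_goal` transition are hypothetical examples.

```python
# Toy Petri-Net for concept-based event detection (illustrative only).
# Places model perception concepts; a transition models a semantic event
# that fires once all of its input concepts carry a token.

class PetriNet:
    def __init__(self):
        self.tokens = {}        # place name -> token count
        self.transitions = {}   # transition name -> (input places, output places)

    def add_place(self, place, tokens=0):
        self.tokens[place] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (tuple(inputs), tuple(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.tokens[p] > 0 for p in inputs)

    def fire(self, name):
        # Consume one token from each input place, produce one in each output.
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.tokens[p] -= 1
        for p in outputs:
            self.tokens[p] += 1
        return True


net = PetriNet()
for pc in ("crowd_cheer", "scoreboard_change", "replay"):
    net.add_place(pc)
net.add_place("goal_highlight")
net.add_transition("detect_goal",
                   inputs=("crowd_cheer", "scoreboard_change", "replay"),
                   outputs=("goal_highlight",))

# Hypothetical low-level detectors deposit tokens as concepts are perceived:
for pc in ("crowd_cheer", "scoreboard_change", "replay"):
    net.tokens[pc] += 1

fired = net.fire("detect_goal")
print(fired, net.tokens["goal_highlight"])  # True 1
```

In the paper's framework, the low-level PC detectors (visual, aural, motion) would play the role of the token sources, and composing such nets yields reusable detectors for higher-level events.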
