Details
Original language | English |
---|---|
Title of host publication | CIKM 2024 |
Subtitle | Proceedings of the 33rd ACM International Conference on Information and Knowledge Management |
Pages | 1743-1751 |
Number of pages | 9 |
ISBN (electronic) | 9798400704369 |
Publication status | Published - 21 Oct 2024 |
Event | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States. Duration: 21 Oct 2024 → 25 Oct 2024 |
Abstract
Identifying the regions of a learning resource that a learner pays attention to is crucial for assessing the material's impact and improving its design and related support systems. Saliency detection in videos addresses the automatic recognition of attention-drawing regions in single frames. In educational settings, the recognition of pertinent regions in a video's visual stream can enhance content accessibility and information retrieval tasks such as video segmentation, navigation, and summarization. Such advancements can pave the way for the development of advanced AI-assisted technologies that support learning with greater efficacy. However, this task becomes particularly challenging for educational videos due to the combination of unique characteristics such as text, voice, illustrations, animations, and more. To the best of our knowledge, there is currently no study that evaluates saliency detection approaches in educational videos. In this paper, we address this gap by evaluating four state-of-the-art saliency detection approaches for educational videos. We reproduce the original studies and explore the replication capabilities for general-purpose (non-educational) datasets. Then, we investigate the generalization capabilities of the models and evaluate their performance on educational videos. We conduct a comprehensive analysis to identify common failure scenarios and possible areas of improvement. Our experimental results show that educational videos remain a challenging context for generic video saliency detection models.
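For context on how the evaluation described above is typically scored, here is a minimal sketch of two metrics that are standard in the saliency-detection literature: the linear correlation coefficient (CC) and normalized scanpath saliency (NSS). That the paper reports exactly these metrics is an assumption on my part, and the array shapes and data below are purely illustrative.

```python
# Illustrative sketch (not the paper's code): per-frame scoring of a
# predicted saliency map against ground-truth eye-tracking data.
import numpy as np

def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Linear correlation coefficient between two saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def nss(pred: np.ndarray, fixations: np.ndarray) -> float:
    """Normalized scanpath saliency: mean z-scored prediction value
    at fixation locations (fixations is a binary map)."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fixations > 0].mean())

# Hypothetical example: score one video frame of a model's output.
pred = np.random.rand(288, 512)                 # model's saliency map
fix = np.zeros((288, 512)); fix[100, 200] = 1.0  # one recorded fixation
print(f"CC={cc(pred, fix):.4f}  NSS={nss(pred, fix):.4f}")
```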
ASJC Scopus subject areas
- Business, Management and Accounting (all)
- General Business, Management and Accounting
- Decision Sciences (all)
- General Decision Sciences
Cite
Navarrete, E., Ewerth, R., & Hoppe, A. (2024). Saliency Detection in Educational Videos. In CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 1743-1751).
Publication: Chapter in book/report/anthology/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Saliency Detection in Educational Videos
T2 - 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024
AU - Navarrete, Evelyn
AU - Ewerth, Ralph
AU - Hoppe, Anett
N1 - Publisher Copyright: © 2024 Owner/Author.
PY - 2024/10/21
Y1 - 2024/10/21
AB - Identifying the regions of a learning resource that a learner pays attention to is crucial for assessing the material's impact and improving its design and related support systems. Saliency detection in videos addresses the automatic recognition of attention-drawing regions in single frames. In educational settings, the recognition of pertinent regions in a video's visual stream can enhance content accessibility and information retrieval tasks such as video segmentation, navigation, and summarization. Such advancements can pave the way for the development of advanced AI-assisted technologies that support learning with greater efficacy. However, this task becomes particularly challenging for educational videos due to the combination of unique characteristics such as text, voice, illustrations, animations, and more. To the best of our knowledge, there is currently no study that evaluates saliency detection approaches in educational videos. In this paper, we address this gap by evaluating four state-of-the-art saliency detection approaches for educational videos. We reproduce the original studies and explore the replication capabilities for general-purpose (non-educational) datasets. Then, we investigate the generalization capabilities of the models and evaluate their performance on educational videos. We conduct a comprehensive analysis to identify common failure scenarios and possible areas of improvement. Our experimental results show that educational videos remain a challenging context for generic video saliency detection models.
KW - educational videos
KW - video saliency detection
KW - video-based learning
UR - http://www.scopus.com/inward/record.url?scp=85210038718&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2408.04515
DO - 10.48550/arXiv.2408.04515
M3 - Conference contribution
AN - SCOPUS:85210038718
SP - 1743
EP - 1751
BT - CIKM 2024
Y2 - 21 October 2024 through 25 October 2024
ER -