glam/schemas/20251121/linkml/modules/classes/VideoTranscript.yaml
kempersc 51554947a0 feat(schema): Add video content schema with comprehensive examples
Video Schema Classes (9 files):
- VideoPost, VideoComment: Social media video modeling
- VideoTextContent: Base class for text content extraction
- VideoTranscript, VideoSubtitle: Text with timing and formatting
- VideoTimeSegment: Time code handling with ISO 8601 duration
- VideoAnnotation: Base annotation with W3C Web Annotation alignment
- VideoAnnotationTypes: Scene, Object, OCR detection annotations
- VideoChapter, VideoChapterList: Navigation and chapter structure
- VideoAudioAnnotation: Speaker diarization, music, sound events

Enumerations (12 enums):
- VideoDefinitionEnum, LiveBroadcastStatusEnum
- TranscriptFormatEnum, SubtitleFormatEnum, SubtitlePositionEnum
- AnnotationTypeEnum, AnnotationMotivationEnum
- DetectionLevelEnum, SceneTypeEnum, TransitionTypeEnum, TextTypeEnum
- ChapterSourceEnum, AudioEventTypeEnum, SoundEventTypeEnum, MusicTypeEnum

Examples (904 lines, 10 comprehensive heritage-themed examples):
- Rijksmuseum virtual tour chapters (5 chapters with heritage entity refs)
- Operation Night Watch documentary chapters (5 chapters)
- VideoAudioAnnotation: curator interview, exhibition promo, museum lecture

All examples reference real heritage entities with Wikidata IDs:
Q5598 (Rembrandt), Q41264 (Vermeer), Q219831 (The Night Watch)
2025-12-16 20:03:17 +01:00


# Video Transcript Class
# Full text transcription of video audio content
#
# Part of Heritage Custodian Ontology v0.9.5
#
# HIERARCHY:
#   E73_Information_Object (CIDOC-CRM)
#   │
#   └── VideoTextContent (abstract base - provenance)
#       │
#       └── VideoTranscript (this class)
#           │
#           └── VideoSubtitle (time-coded extension)
#
# DESIGN RATIONALE:
# VideoTranscript represents the complete textual representation of spoken
# content in a video. It extends VideoTextContent to inherit comprehensive
# provenance tracking and adds transcript-specific slots:
#
# - full_text: Complete transcript as single text block
# - transcript_format: How the text is structured (plain, paragraphed, etc.)
# - segments: Optional structured breakdown into VideoTimeSegments
# - includes_timestamps/speakers: Metadata about content structure
#
# VideoSubtitle extends this because subtitles ARE transcripts plus time-codes.
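#
# EXAMPLE INSTANCE (illustrative sketch, not taken from the shipped examples
# file; all values below are hypothetical, and the provenance slots shown are
# assumed to be inherited from ./VideoTextContent):
#
#   full_text: |
#     Welcome to the Rijksmuseum. Today we'll explore the masterpieces
#     of Dutch Golden Age painting.
#   transcript_format: PARAGRAPHED
#   includes_timestamps: false
#   includes_speakers: false
#   speaker_count: 1
#   primary_speaker: "Narrator"
#   generation_method: ASR_AUTOMATIC     # inherited slot (VideoTextContent)
#   overall_confidence: 0.92             # inherited slot (VideoTextContent)
#   is_verified: false                   # inherited slot (VideoTextContent)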
id: https://nde.nl/ontology/hc/class/VideoTranscript
name: video_transcript_class
title: Video Transcript Class

imports:
  - linkml:types
  - ./VideoTextContent
  - ./VideoTimeSegment

prefixes:
  linkml: https://w3id.org/linkml/
  hc: https://nde.nl/ontology/hc/
  schema: http://schema.org/
  dcterms: http://purl.org/dc/terms/
  prov: http://www.w3.org/ns/prov#
  crm: http://www.cidoc-crm.org/cidoc-crm/
  skos: http://www.w3.org/2004/02/skos/core#

default_prefix: hc
classes:
  VideoTranscript:
    is_a: VideoTextContent
    class_uri: crm:E33_Linguistic_Object
    abstract: false
    description: |
      Full text transcription of video audio content.

      **DEFINITION**:
      A VideoTranscript is the complete textual representation of all spoken
      content in a video. It extends VideoTextContent with transcript-specific
      properties and inherits all provenance tracking capabilities.

      **RELATIONSHIP TO VideoSubtitle**:
      VideoSubtitle is a subclass of VideoTranscript because:
      1. A subtitle file contains everything a transcript needs PLUS time codes
      2. You can derive a plain transcript from subtitles by stripping times
      3. This inheritance allows polymorphic handling of text content

      ```
      VideoTranscript              VideoSubtitle (is_a VideoTranscript)
      ├── full_text                ├── full_text (inherited)
      ├── segments[]               ├── segments[] (required, with times)
      └── (optional times)         └── subtitle_format (SRT, VTT, etc.)
      ```

      **SCHEMA.ORG ALIGNMENT**:
      Maps to the `schema:transcript` property:
      > "If this MediaObject is an AudioObject or VideoObject,
      > the transcript of that object."

      **CIDOC-CRM E33_Linguistic_Object**:
      E33 is the class comprising:
      > "identifiable expressions in natural language or code"
      A transcript is a linguistic object derived from the audio track of
      a video (which is itself an E73_Information_Object).

      **TRANSCRIPT FORMATS**:
      | Format      | Description                   | Use Case               |
      |-------------|-------------------------------|------------------------|
      | PLAIN_TEXT  | Continuous text, no structure | Simple search indexing |
      | PARAGRAPHED | Text broken into paragraphs   | Human reading          |
      | STRUCTURED  | Segments with speaker labels  | Research, analysis     |
      | TIMESTAMPED | Segments with time markers    | Navigation, subtitling |

      **GENERATION METHODS** (inherited from VideoTextContent):
      | Method               | Typical Use            | Quality   |
      |----------------------|------------------------|-----------|
      | ASR_AUTOMATIC        | Whisper, Google STT    | 0.80-0.95 |
      | MANUAL_TRANSCRIPTION | Human transcriber      | 0.98-1.0  |
      | PLATFORM_PROVIDED    | YouTube auto-captions  | 0.75-0.90 |
      | HYBRID               | ASR + human correction | 0.95-1.0  |

      **HERITAGE INSTITUTION CONTEXT**:
      Transcripts are critical for heritage video collections:
      1. **Discovery**: Full-text search over video content
      2. **Accessibility**: Deaf/HoH access to spoken content
      3. **Preservation**: Text outlasts video format obsolescence
      4. **Research**: Corpus analysis, keyword extraction
      5. **Translation**: Base for multilingual access
      6. **SEO**: Search engine indexing of video content

      **STRUCTURED SEGMENTS**:
      When `segments` is populated, the transcript has a structural breakdown:
      ```yaml
      segments:
        - segment_index: 0
          start_seconds: 0.0
          end_seconds: 5.5
          segment_text: "Welcome to the Rijksmuseum."
          speaker_label: "Narrator"
          confidence: 0.94
        - segment_index: 1
          start_seconds: 5.5
          end_seconds: 12.3
          segment_text: "Today we'll explore the Night Watch gallery."
          speaker_label: "Narrator"
          confidence: 0.91
      ```

      **PROVENANCE** (inherited from VideoTextContent):
      All transcripts include:
      - `source_video`: Which video was transcribed
      - `generated_by`: Tool/person that created the transcript
      - `generation_method`: ASR_AUTOMATIC, MANUAL_TRANSCRIPTION, etc.
      - `generation_timestamp`: When the transcript was created
      - `overall_confidence`: Aggregate quality score
      - `is_verified`: Whether human-reviewed
    exact_mappings:
      - crm:E33_Linguistic_Object
    close_mappings:
      - schema:transcript
    related_mappings:
      - dcterms:Text
    slots:
      # Core content
      - full_text
      - transcript_format
      # Structural information
      - includes_timestamps
      - includes_speakers
      - segments
      # Speaker metadata
      - speaker_count
      - primary_speaker
      # Additional metadata
      - source_language_auto_detected
      - paragraph_count
      - sentence_count
    slot_usage:
      full_text:
        slot_uri: schema:text
        description: |
          Complete transcript text as a single string.
          Schema.org: text for primary textual content.

          Contains all spoken content from the video, concatenated.
          May include:
          - Speaker labels (if includes_speakers = true)
          - Timestamps (if includes_timestamps = true)
          - Paragraph breaks (if format = PARAGRAPHED or STRUCTURED)

          For structured access, use the `segments` slot instead.
        range: string
        required: true
        examples:
          - value: |
              Welcome to the Rijksmuseum. Today we'll explore the masterpieces
              of Dutch Golden Age painting. Our first stop is the Night Watch
              by Rembrandt van Rijn, painted in 1642.
            description: "Plain text transcript excerpt"
          - value: |
              [Narrator] Welcome to the Rijksmuseum.
              [Narrator] Today we'll explore the masterpieces of Dutch Golden Age painting.
              [Curator] Our first stop is the Night Watch by Rembrandt van Rijn.
            description: "Transcript with speaker labels"
      transcript_format:
        slot_uri: dcterms:format
        description: |
          Format/structure of the transcript text.
          Dublin Core: format for resource format.

          Indicates how the full_text is structured:
          - PLAIN_TEXT: Continuous text without breaks
          - PARAGRAPHED: Broken into paragraphs
          - STRUCTURED: Includes speaker labels, times, or both
          - TIMESTAMPED: Includes inline time markers
        range: TranscriptFormatEnum
        required: false
        ifabsent: "string(PLAIN_TEXT)"
        examples:
          - value: "STRUCTURED"
            description: "Text with speaker labels and paragraph breaks"
      includes_timestamps:
        slot_uri: hc:includesTimestamps
        description: |
          Whether the transcript includes time markers.
          - **true**: Timestamps are embedded in full_text or segments have times
          - **false**: No temporal information (default)

          If true, prefer using `segments` for programmatic access.
        range: boolean
        required: false
        ifabsent: "false"
        examples:
          - value: true
            description: "Transcript has time codes"
      includes_speakers:
        slot_uri: hc:includesSpeakers
        description: |
          Whether the transcript includes speaker identification.
          - **true**: Speaker labels/diarization available
          - **false**: Single speaker or no identification (default)

          When true, check `speaker_count` for the number of distinct speakers.
        range: boolean
        required: false
        ifabsent: "false"
        examples:
          - value: true
            description: "Multi-speaker transcript with diarization"
      segments:
        slot_uri: hc:transcriptSegments
        description: |
          Structured breakdown of the transcript into time-coded segments.
          Optional for VideoTranscript (plain transcripts may not have times).
          Required for VideoSubtitle (subtitles must have time codes).

          Each segment is a VideoTimeSegment with:
          - start_seconds / end_seconds: Time boundaries
          - segment_text: Text for this segment
          - confidence: Per-segment accuracy score
          - speaker_id / speaker_label: Speaker identification

          Use segments for:
          - Video player synchronization
          - Jump-to-time navigation
          - Per-segment quality analysis
          - Speaker-separated views
        range: VideoTimeSegment
        required: false
        multivalued: true
        inlined: true
        inlined_as_list: true
        examples:
          - value: |
              - segment_index: 0
                start_seconds: 0.0
                end_seconds: 3.5
                segment_text: "Welcome to the museum."
                confidence: 0.95
            description: "Single structured segment"
      speaker_count:
        slot_uri: hc:speakerCount
        description: |
          Number of distinct speakers identified in the transcript.
          Only meaningful when includes_speakers = true.
          - 0 = Unknown/not analyzed
          - 1 = Single speaker (monologue)
          - 2+ = Multi-speaker (dialogue, panel, interview)
        range: integer
        required: false
        minimum_value: 0
        examples:
          - value: 3
            description: "Three speakers identified"
      primary_speaker:
        slot_uri: hc:primarySpeaker
        description: |
          Identifier or name of the main/dominant speaker.
          - For interviews: the interviewee (not the interviewer)
          - For presentations: the presenter
          - For tours: the guide

          May be generic ("Narrator") or specific ("Dr. Taco Dibbits").
        range: string
        required: false
        examples:
          - value: "Narrator"
            description: "Generic primary speaker"
          - value: "Dr. Taco Dibbits, Museum Director"
            description: "Named primary speaker"
      source_language_auto_detected:
        slot_uri: hc:sourceLanguageAutoDetected
        description: |
          Whether the content_language was auto-detected by ASR.
          - **true**: Language detected by the ASR model
          - **false**: Language was specified/known (default)

          Useful for quality assessment - auto-detection may be wrong.
        range: boolean
        required: false
        ifabsent: "false"
        examples:
          - value: true
            description: "Language was auto-detected"
      paragraph_count:
        slot_uri: hc:paragraphCount
        description: |
          Number of paragraphs in the transcript.
          Only meaningful when transcript_format = PARAGRAPHED or STRUCTURED.
          Useful for content sizing and readability assessment.
        range: integer
        required: false
        minimum_value: 0
        examples:
          - value: 15
            description: "Transcript has 15 paragraphs"
      sentence_count:
        slot_uri: hc:sentenceCount
        description: |
          Approximate number of sentences in the transcript.
          Derived from punctuation analysis or NLP sentence segmentation.
          Useful for content analysis and readability metrics.
        range: integer
        required: false
        minimum_value: 0
        examples:
          - value: 47
            description: "Transcript has ~47 sentences"
    comments:
      - "Full text transcription of video audio content"
      - "Extends VideoTextContent with transcript-specific properties"
      - "Base class for VideoSubtitle (subtitles are transcripts + time codes)"
      - "Supports both plain text and structured segment-based transcripts"
      - "Critical for accessibility, discovery, and preservation"
    see_also:
      - "https://schema.org/transcript"
      - "http://www.cidoc-crm.org/cidoc-crm/E33_Linguistic_Object"
# ============================================================================
# Enumerations
# ============================================================================
enums:
  TranscriptFormatEnum:
    description: |
      Format/structure of transcript text content.
      Indicates how the full_text is organized.
    permissible_values:
      PLAIN_TEXT:
        description: |
          Continuous text without structural markers.
          No speaker labels, no timestamps, no paragraph breaks.
          Suitable for simple full-text search indexing.
      PARAGRAPHED:
        description: |
          Text broken into paragraphs.
          May be based on topic changes, speaker pauses, or semantic units.
          Improves human readability.
      STRUCTURED:
        description: |
          Text with speaker labels and/or section markers.
          Format: "[Speaker] Text content" or similar.
          Enables speaker-specific analysis.
      TIMESTAMPED:
        description: |
          Text with inline time markers.
          Format: "[00:30] Text content" or similar.
          Enables temporal navigation in text view.
      VERBATIM:
        description: |
          Exact transcription including fillers, false starts, and overlaps,
          with "[um]", "[pause]", "[crosstalk]" markers.
          Used for linguistic analysis or legal transcripts.
      CLEAN:
        description: |
          Edited for readability - fillers removed, grammar corrected.
          May diverge slightly from the literal spoken content.
          Suitable for publication or accessibility.
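
# EXAMPLE (illustrative): the same short passage rendered under each
# TranscriptFormatEnum value (sample text is hypothetical):
#
#   PLAIN_TEXT:  Welcome to the museum. Today we look at the Night Watch.
#   PARAGRAPHED: Welcome to the museum.
#
#                Today we look at the Night Watch.
#   STRUCTURED:  [Guide] Welcome to the museum.
#                [Guide] Today we look at the Night Watch.
#   TIMESTAMPED: [00:00] Welcome to the museum.
#                [00:04] Today we look at the Night Watch.
#   VERBATIM:    Welcome, [um], to the museum. [pause] Today we, we look
#                at the Night Watch.
#   CLEAN:       Welcome to the museum. Today we look at the Night Watch.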
# ============================================================================
# Slot Definitions
# ============================================================================
slots:
  full_text:
    description: Complete transcript text as a single string
    range: string
  transcript_format:
    description: Format/structure of the transcript text
    range: TranscriptFormatEnum
  includes_timestamps:
    description: Whether the transcript includes time markers
    range: boolean
  includes_speakers:
    description: Whether the transcript includes speaker identification
    range: boolean
  segments:
    description: Structured breakdown into time-coded segments
    range: VideoTimeSegment
    multivalued: true
  speaker_count:
    description: Number of distinct speakers identified
    range: integer
  primary_speaker:
    description: Identifier/name of the main speaker
    range: string
  source_language_auto_detected:
    description: Whether the language was auto-detected by ASR
    range: boolean
  paragraph_count:
    description: Number of paragraphs in the transcript
    range: integer
  sentence_count:
    description: Approximate number of sentences in the transcript
    range: integer
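
# EXAMPLE (illustrative): deriving a plain transcript from time-coded
# segments, as described in the design rationale - concatenate segment_text
# in segment_index order and drop the timing fields (all values hypothetical):
#
#   segments:
#     - segment_index: 0
#       start_seconds: 0.0
#       end_seconds: 3.5
#       segment_text: "Welcome to the museum."
#     - segment_index: 1
#       start_seconds: 3.5
#       end_seconds: 7.0
#       segment_text: "Our first stop is the Night Watch."
#
#   # resulting VideoTranscript content:
#   full_text: "Welcome to the museum. Our first stop is the Night Watch."
#   transcript_format: PLAIN_TEXT
#   includes_timestamps: false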