glam/frontend/public/schemas/20251121/linkml/modules/classes/VideoTranscript.yaml
kempersc b34992b1d3 Migrate all 293 class files to ontology-aligned slots
Extends migration to all class types (museums, libraries, galleries, etc.)

New slots added to class_metadata_slots.yaml:
- RiC-O: rico_record_set_type, rico_organizational_principle,
  rico_has_or_had_holder, rico_note
- Multilingual: label_de, label_es, label_fr, label_nl, label_it, label_pt
- Scope: scope_includes, scope_excludes, custodian_only,
  organizational_level, geographic_restriction
- Notes: privacy_note, preservation_note, legal_note

Migration script now handles 30+ annotation types.
All migrated schemas pass linkml-validate.

Total: 387 class files now use proper slots instead of annotations.
2026-01-06 12:24:54 +01:00


id: https://nde.nl/ontology/hc/class/VideoTranscript
name: video_transcript_class
title: Video Transcript Class
imports:
- linkml:types
- ./VideoTextContent
- ./VideoTimeSegment
- ../slots/class_metadata_slots
prefixes:
  linkml: https://w3id.org/linkml/
  hc: https://nde.nl/ontology/hc/
  schema: http://schema.org/
  dcterms: http://purl.org/dc/terms/
  prov: http://www.w3.org/ns/prov#
  crm: http://www.cidoc-crm.org/cidoc-crm/
  skos: http://www.w3.org/2004/02/skos/core#
default_prefix: hc
classes:
  VideoTranscript:
    is_a: VideoTextContent
    class_uri: crm:E33_Linguistic_Object
    abstract: false
    description: |
      Full text transcription of video audio content.

      **DEFINITION**:

      A VideoTranscript is the complete textual representation of all spoken
      content in a video. It extends VideoTextContent with transcript-specific
      properties and inherits all provenance tracking capabilities.

      **RELATIONSHIP TO VideoSubtitle**:

      VideoSubtitle is a subclass of VideoTranscript because:
      1. A subtitle file contains everything a transcript needs PLUS time codes
      2. You can derive a plain transcript from subtitles by stripping times
      3. This inheritance allows polymorphic handling of text content

      ```
      VideoTranscript            VideoSubtitle (is_a VideoTranscript)
      ├── full_text              ├── full_text (inherited)
      ├── segments[]             ├── segments[] (required, with times)
      └── (optional times)       └── subtitle_format (SRT, VTT, etc.)
      ```

      **SCHEMA.ORG ALIGNMENT**:

      Maps to `schema:transcript` property:
      > "If this MediaObject is an AudioObject or VideoObject,
      > the transcript of that object."

      **CIDOC-CRM E33_Linguistic_Object**:

      E33 is the class comprising:
      > "identifiable expressions in natural language or code"

      A transcript is a linguistic object derived from the audio track of
      a video (which is itself an E73_Information_Object).

      **TRANSCRIPT FORMATS**:

      | Format | Description | Use Case |
      |--------|-------------|----------|
      | PLAIN_TEXT | Continuous text, no structure | Simple search indexing |
      | PARAGRAPHED | Text broken into paragraphs | Human reading |
      | STRUCTURED | Segments with speaker labels | Research, analysis |
      | TIMESTAMPED | Segments with time markers | Navigation, subtitling |
      | VERBATIM | Exact transcription incl. fillers | Linguistic, legal analysis |
      | CLEAN | Edited for readability | Publication, accessibility |

      **GENERATION METHODS** (inherited from VideoTextContent):

      | Method | Typical Use | Quality |
      |--------|-------------|---------|
      | ASR_AUTOMATIC | Whisper, Google STT | 0.80-0.95 |
      | MANUAL_TRANSCRIPTION | Human transcriber | 0.98-1.0 |
      | PLATFORM_PROVIDED | YouTube auto-captions | 0.75-0.90 |
      | HYBRID | ASR + human correction | 0.95-1.0 |

      **HERITAGE INSTITUTION CONTEXT**:

      Transcripts are critical for heritage video collections:

      1. **Discovery**: Full-text search over video content
      2. **Accessibility**: Deaf/HoH access to spoken content
      3. **Preservation**: Text outlasts video format obsolescence
      4. **Research**: Corpus analysis, keyword extraction
      5. **Translation**: Base for multilingual access
      6. **SEO**: Search engine indexing of video content

      **STRUCTURED SEGMENTS**:

      When `segments` is populated, the transcript has a structural breakdown:

      ```yaml
      segments:
      - segment_index: 0
        start_seconds: 0.0
        end_seconds: 5.5
        segment_text: "Welcome to the Rijksmuseum."
        speaker_label: "Narrator"
        confidence: 0.94
      - segment_index: 1
        start_seconds: 5.5
        end_seconds: 12.3
        segment_text: "Today we'll explore the Night Watch gallery."
        speaker_label: "Narrator"
        confidence: 0.91
      ```

      **PROVENANCE** (inherited from VideoTextContent):

      All transcripts include:
      - `source_video`: Which video was transcribed
      - `generated_by`: Tool/person that created transcript
      - `generation_method`: ASR_AUTOMATIC, MANUAL_TRANSCRIPTION, etc.
      - `generation_timestamp`: When transcript was created
      - `overall_confidence`: Aggregate quality score
      - `is_verified`: Whether human-reviewed
    exact_mappings:
    - crm:E33_Linguistic_Object
    close_mappings:
    - schema:transcript
    related_mappings:
    - dcterms:Text
    slots:
    - full_text
    - includes_speakers
    - includes_timestamps
    - paragraph_count
    - primary_speaker
    - segments
    - sentence_count
    - source_language_auto_detected
    - speaker_count
    - specificity_annotation
    - template_specificity
    - transcript_format
    slot_usage:
      full_text:
        slot_uri: schema:text
        description: |
          Complete transcript text as a single string.
          Schema.org: text for primary textual content.
          Contains all spoken content from the video, concatenated.
          May include:
          - Speaker labels (if includes_speakers = true)
          - Timestamps (if includes_timestamps = true)
          - Paragraph breaks (if format = PARAGRAPHED or STRUCTURED)
          For structured access, use the `segments` slot instead.
        range: string
        required: true
        examples:
        - value: |
            Welcome to the Rijksmuseum. Today we'll explore the masterpieces
            of Dutch Golden Age painting. Our first stop is the Night Watch
            by Rembrandt van Rijn, painted in 1642.
          description: Plain text transcript excerpt
        - value: |
            [Narrator] Welcome to the Rijksmuseum.
            [Narrator] Today we'll explore the masterpieces of Dutch Golden Age painting.
            [Curator] Our first stop is the Night Watch by Rembrandt van Rijn.
          description: Transcript with speaker labels
      transcript_format:
        slot_uri: dcterms:format
        description: |
          Format/structure of the transcript text.
          Dublin Core: format for resource format.
          Indicates how the full_text is structured:
          - PLAIN_TEXT: Continuous text without breaks
          - PARAGRAPHED: Broken into paragraphs
          - STRUCTURED: Includes speaker labels, times, or both
          - TIMESTAMPED: Includes inline time markers
          - VERBATIM: Exact transcription including fillers
          - CLEAN: Edited for readability
        range: TranscriptFormatEnum
        required: false
        ifabsent: string(PLAIN_TEXT)
        examples:
        - value: STRUCTURED
          description: Text with speaker labels and paragraph breaks
      includes_timestamps:
        slot_uri: hc:includesTimestamps
        description: |
          Whether the transcript includes time markers.
          - **true**: Timestamps are embedded in full_text or segments have times
          - **false**: No temporal information (default)
          If true, prefer using `segments` for programmatic access.
        range: boolean
        required: false
        ifabsent: 'false'
        examples:
        - value: true
          description: Transcript has time codes
      includes_speakers:
        slot_uri: hc:includesSpeakers
        description: |
          Whether the transcript includes speaker identification.
          - **true**: Speaker labels/diarization available
          - **false**: Single speaker or no identification (default)
          When true, check `speaker_count` for the number of distinct speakers.
        range: boolean
        required: false
        ifabsent: 'false'
        examples:
        - value: true
          description: Multi-speaker transcript with diarization
      segments:
        slot_uri: hc:transcriptSegments
        description: |
          Structured breakdown of the transcript into time-coded segments.
          Optional for VideoTranscript (plain transcripts may not have times).
          Required for VideoSubtitle (subtitles must have time codes).
          Each segment is a VideoTimeSegment with:
          - start_seconds / end_seconds: Time boundaries
          - segment_text: Text for this segment
          - confidence: Per-segment accuracy score
          - speaker_id / speaker_label: Speaker identification
          Use segments for:
          - Video player synchronization
          - Jump-to-time navigation
          - Per-segment quality analysis
          - Speaker-separated views
        range: VideoTimeSegment
        required: false
        multivalued: true
        inlined: true
        inlined_as_list: true
        examples:
        - value: |
            - segment_index: 0
              start_seconds: 0.0
              end_seconds: 3.5
              segment_text: "Welcome to the museum."
              confidence: 0.95
          description: Single structured segment
      speaker_count:
        slot_uri: hc:speakerCount
        description: |
          Number of distinct speakers identified in the transcript.
          Only meaningful when includes_speakers = true.
          0 = Unknown/not analyzed
          1 = Single speaker (monologue)
          2+ = Multi-speaker (dialogue, panel, interview)
        range: integer
        required: false
        minimum_value: 0
        examples:
        - value: 3
          description: Three speakers identified
      primary_speaker:
        slot_uri: hc:primarySpeaker
        description: |
          Identifier or name of the main/dominant speaker.
          For interviews: the interviewee (not the interviewer)
          For presentations: the presenter
          For tours: the guide
          May be generic ("Narrator") or specific ("Dr. Taco Dibbits").
        range: string
        required: false
        examples:
        - value: Narrator
          description: Generic primary speaker
        - value: Dr. Taco Dibbits, Museum Director
          description: Named primary speaker
      source_language_auto_detected:
        slot_uri: hc:sourceLanguageAutoDetected
        description: |
          Whether the content_language was auto-detected by ASR.
          - **true**: Language detected by the ASR model
          - **false**: Language was specified/known (default)
          Useful for quality assessment - auto-detection may be wrong.
        range: boolean
        required: false
        ifabsent: 'false'
        examples:
        - value: true
          description: Language was auto-detected
      paragraph_count:
        slot_uri: hc:paragraphCount
        description: |
          Number of paragraphs in the transcript.
          Only meaningful when transcript_format = PARAGRAPHED or STRUCTURED.
          Useful for content sizing and readability assessment.
        range: integer
        required: false
        minimum_value: 0
        examples:
        - value: 15
          description: Transcript has 15 paragraphs
      sentence_count:
        slot_uri: hc:sentenceCount
        description: |
          Approximate number of sentences in the transcript.
          Derived from punctuation analysis or NLP sentence segmentation.
          Useful for content analysis and readability metrics.
        range: integer
        required: false
        minimum_value: 0
        examples:
        - value: 47
          description: Transcript has ~47 sentences
      specificity_annotation:
        range: SpecificityAnnotation
        inlined: true
      template_specificity:
        range: TemplateSpecificityScores
        inlined: true
    comments:
    - Full text transcription of video audio content
    - Extends VideoTextContent with transcript-specific properties
    - Base class for VideoSubtitle (subtitles are transcripts + time codes)
    - Supports both plain text and structured segment-based transcripts
    - Critical for accessibility, discovery, and preservation
    see_also:
    - https://schema.org/transcript
    - http://www.cidoc-crm.org/cidoc-crm/E33_Linguistic_Object
enums:
  TranscriptFormatEnum:
    description: |
      Format/structure of transcript text content.
      Indicates how the full_text is organized.
    permissible_values:
      PLAIN_TEXT:
        description: |
          Continuous text without structural markers.
          No speaker labels, no timestamps, no paragraph breaks.
          Suitable for simple full-text search indexing.
      PARAGRAPHED:
        description: |
          Text broken into paragraphs.
          May be based on topic changes, speaker pauses, or semantic units.
          Improves human readability.
      STRUCTURED:
        description: |
          Text with speaker labels and/or section markers.
          Format: "[Speaker] Text content" or similar.
          Enables speaker-specific analysis.
      TIMESTAMPED:
        description: |
          Text with inline time markers.
          Format: "[00:30] Text content" or similar.
          Enables temporal navigation in text view.
      VERBATIM:
        description: |
          Exact transcription including fillers, false starts, overlaps.
          "[um]", "[pause]", "[crosstalk]" markers.
          Used for linguistic analysis or legal transcripts.
      CLEAN:
        description: |
          Edited for readability - fillers removed, grammar corrected.
          May diverge slightly from literal spoken content.
          Suitable for publication or accessibility.
slots:
  full_text:
    description: Complete transcript text as a single string
    range: string
  transcript_format:
    description: Format/structure of transcript text
    range: TranscriptFormatEnum
  includes_timestamps:
    description: Whether transcript includes time markers
    range: boolean
  includes_speakers:
    description: Whether transcript includes speaker identification
    range: boolean
  segments:
    description: Structured breakdown into time-coded segments
    range: VideoTimeSegment
    multivalued: true
  speaker_count:
    description: Number of distinct speakers identified
    range: integer
  primary_speaker:
    description: Identifier/name of main speaker
    range: string
  source_language_auto_detected:
    description: Whether language was auto-detected by ASR
    range: boolean
  paragraph_count:
    description: Number of paragraphs in transcript
    range: integer
  sentence_count:
    description: Number of sentences in transcript
    range: integer
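
As a sketch of what conformance to this class means for a data record: the slot names, the `required: true` flag on `full_text`, the `ifabsent` default for `transcript_format`, the `minimum_value: 0` constraints, and the TranscriptFormatEnum values below come from the schema above, while the `check_instance()` helper and the sample record are purely illustrative (real validation would run `linkml-validate` against the schema):

```python
# Minimal, hand-rolled conformance check for a VideoTranscript record.
# Illustrative only: check_instance() is not part of the schema or of
# LinkML tooling; it just mirrors a few constraints declared above.

TRANSCRIPT_FORMAT_ENUM = {
    "PLAIN_TEXT", "PARAGRAPHED", "STRUCTURED",
    "TIMESTAMPED", "VERBATIM", "CLEAN",
}

def check_instance(obj):
    """Return a list of problems; an empty list means the record looks OK."""
    problems = []
    # full_text is the only slot marked required: true on this class.
    full_text = obj.get("full_text")
    if not isinstance(full_text, str) or not full_text.strip():
        problems.append("full_text is required and must be a non-empty string")
    # transcript_format defaults to PLAIN_TEXT (ifabsent) and must be an enum value.
    fmt = obj.get("transcript_format", "PLAIN_TEXT")
    if fmt not in TRANSCRIPT_FORMAT_ENUM:
        problems.append(f"transcript_format {fmt!r} not in TranscriptFormatEnum")
    # The counters are optional integers with minimum_value: 0.
    for slot in ("speaker_count", "paragraph_count", "sentence_count"):
        value = obj.get(slot)
        if value is not None and (not isinstance(value, int) or value < 0):
            problems.append(f"{slot} must be an integer >= 0")
    # segments is optional on VideoTranscript (required only on VideoSubtitle);
    # when present, each segment's time boundaries should be ordered.
    for seg in obj.get("segments", []):
        if seg.get("start_seconds", 0) > seg.get("end_seconds", 0):
            problems.append("segment start_seconds must not exceed end_seconds")
    return problems

record = {
    "full_text": "Welcome to the Rijksmuseum.",
    "transcript_format": "STRUCTURED",
    "includes_speakers": True,
    "speaker_count": 1,
    "segments": [{
        "segment_index": 0,
        "start_seconds": 0.0,
        "end_seconds": 3.5,
        "segment_text": "Welcome to the Rijksmuseum.",
        "confidence": 0.95,
    }],
}

print(check_instance(record))  # → [] (no problems)
```

Omitting `transcript_format` from the record would also pass, since the schema's `ifabsent: string(PLAIN_TEXT)` means readers should treat an absent value as PLAIN_TEXT.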