FormantOnsetMidOffsetExtractor#
Defined in: voxatlas.features.phonology.formant.onset_mid_offset
- class voxatlas.features.phonology.formant.onset_mid_offset.FormantOnsetMidOffsetExtractor[source]#
Bases: BaseExtractor
Extract the phonology.formant.onset_mid_offset feature within the VoxAtlas pipeline.
This public extractor defines the reusable API for computing phonology.formant.onset_mid_offset from VoxAtlas structured inputs. It consumes phoneme units and produces values aligned to phoneme units, making the extractor a stable pipeline node that can be cited independently of the surrounding execution machinery.
Algorithm#
The extractor derives vowel formant structure from aligned phoneme segments and then aggregates those measurements to the declared unit level.
Segment selection: Vowel-bearing phoneme spans are isolated from the aligned unit table, and the corresponding waveform segments are converted into short analysis frames.
Resonance estimation: Linear-predictive analysis or Parselmouth formant tracking is used to estimate \(F_1\), \(F_2\), and \(F_3\) for each analysis frame.
Metric computation: The extractor samples formants at normalized onset, midpoint, and offset positions to provide a compact dynamic descriptor.
Packaging: The resulting statistic is aligned to phoneme units for use in subsequent phonological or conversation-level analyses.
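The onset/mid/offset sampling step above can be sketched as follows. This is a minimal illustration under stated assumptions, not the extractor's actual implementation: the `time` and formant columns mirror the tracks table used in the doctest examples, and linear interpolation at relative positions 0.0, 0.5, and 1.0 is an assumed strategy.

```python
import numpy as np
import pandas as pd

def sample_onset_mid_offset(track: pd.DataFrame, formant: str = "F1") -> dict:
    """Sample one formant at normalized onset, midpoint, and offset positions.

    Assumes `track` holds per-frame rows for a single vowel phoneme with a
    monotonically increasing `time` column, as in the tracks table used by
    the doctest examples. The interpolation choice is illustrative only.
    """
    t = track["time"].to_numpy()
    f = track[formant].to_numpy()
    # Normalize frame times to [0, 1] within the phoneme span.
    rel = (t - t[0]) / (t[-1] - t[0]) if t[-1] > t[0] else np.zeros_like(t)
    return {
        f"{formant}_onset": float(np.interp(0.0, rel, f)),
        f"{formant}_mid": float(np.interp(0.5, rel, f)),
        f"{formant}_offset": float(np.interp(1.0, rel, f)),
    }

# Two frames spanning one vowel; the midpoint is linearly interpolated.
track = pd.DataFrame({"time": [0.005, 0.015], "F1": [300.0, 320.0]})
print(sample_onset_mid_offset(track))
# → {'F1_onset': 300.0, 'F1_mid': 310.0, 'F1_offset': 320.0}
```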
Notes
This extractor declares the upstream dependencies ['phonology.formant.tracks'] and is executed only after those features are available in the pipeline feature store.
Examples
>>> import pandas as pd
>>> from voxatlas.features.feature_input import FeatureInput
>>> from voxatlas.features.feature_output import TableFeatureOutput
>>> from voxatlas.features.phonology.formant.onset_mid_offset import FormantOnsetMidOffsetExtractor
>>> from voxatlas.pipeline.feature_store import FeatureStore
>>> tracks = pd.DataFrame(
...     [
...         {"frame_id": 1, "start": 0.0, "end": 0.01, "time": 0.005, "phoneme_id": 1, "label": "i", "ipa": "i", "is_vowel": 1.0, "F1": 300.0, "F2": 2200.0, "F3": 3000.0},
...         {"frame_id": 2, "start": 0.01, "end": 0.02, "time": 0.015, "phoneme_id": 1, "label": "i", "ipa": "i", "is_vowel": 1.0, "F1": 320.0, "F2": 2180.0, "F3": 2980.0},
...     ]
... )
>>> store = FeatureStore()
>>> store.add("phonology.formant.tracks", TableFeatureOutput(feature="phonology.formant.tracks", unit="frame", values=tracks))
>>> out = FormantOnsetMidOffsetExtractor().compute(FeatureInput(audio=None, units=None, context={"feature_store": store}), {})
>>> any(col.endswith("_onset") for col in out.values.columns)
True
- name: str = 'phonology.formant.onset_mid_offset'#
- input_units: str | None = 'phoneme'#
- output_units: str | None = 'phoneme'#
- dependencies: list[str] = ['phonology.formant.tracks']#
- default_config: dict = {}#
- compute(feature_input, params)[source]#
Compute the extractor output for a single pipeline invocation.
This method is the reusable execution entry point for the extractor. It receives the standard FeatureInput bundle, applies the configured algorithm, and returns feature values aligned to the extractor output units for storage in the pipeline feature store.
- Parameters:
feature_input (object) – Structured extractor input bundling audio, hierarchical units, and execution context for this feature computation.
params (object) – Resolved feature configuration for this invocation. Keys are feature-specific and merged from defaults and pipeline settings.
- Returns:
Structured output aligned to the phoneme unit level when applicable.
- Return type:
FeatureOutput
Examples
>>> import pandas as pd
>>> from voxatlas.features.feature_input import FeatureInput
>>> from voxatlas.features.feature_output import TableFeatureOutput
>>> from voxatlas.features.phonology.formant.onset_mid_offset import FormantOnsetMidOffsetExtractor
>>> from voxatlas.pipeline.feature_store import FeatureStore
>>> tracks = pd.DataFrame(
...     [{"frame_id": 1, "start": 0.0, "end": 0.01, "time": 0.005, "phoneme_id": 1, "label": "i", "ipa": "i", "is_vowel": 1.0, "F1": 300.0, "F2": 2200.0, "F3": 3000.0}]
... )
>>> store = FeatureStore()
>>> store.add("phonology.formant.tracks", TableFeatureOutput(feature="phonology.formant.tracks", unit="frame", values=tracks))
>>> result = FormantOnsetMidOffsetExtractor().compute(FeatureInput(audio=None, units=None, context={"feature_store": store}), {})
>>> result.unit
'phoneme'