FormantMidpointExtractor#
Defined in: voxatlas.features.phonology.formant.midpoint
- class voxatlas.features.phonology.formant.midpoint.FormantMidpointExtractor[source]#
Bases: BaseExtractor

Extract the phonology.formant.midpoint feature within the VoxAtlas pipeline.

This public extractor defines the reusable API for computing phonology.formant.midpoint from VoxAtlas structured inputs. It consumes phoneme units and produces values aligned to phoneme units, making the extractor a stable pipeline node that can be cited independently of the surrounding execution machinery.

Algorithm#
The extractor derives vowel formant structure from aligned phoneme segments and then aggregates those measurements to the declared unit level.
1. Segment selection: Vowel-bearing phoneme spans are isolated from the aligned unit table, and the corresponding waveform segments are converted into short analysis frames.
2. Resonance estimation: Linear-predictive analysis or Parselmouth formant tracking is used to estimate \(F_1\), \(F_2\), and \(F_3\) for each analysis frame.
3. Metric computation: For each vowel segment, the midpoint sample is selected and the corresponding formant vector \((F_1, F_2, F_3)\) is returned.
4. Packaging: The resulting statistic is aligned to phoneme units for use in subsequent phonological or conversation-level analyses.
Notes
This extractor declares the upstream dependencies ['phonology.formant.tracks'] and is executed only after those features are available in the pipeline feature store.
Examples
>>> import pandas as pd
>>> from voxatlas.features.feature_input import FeatureInput
>>> from voxatlas.features.feature_output import TableFeatureOutput
>>> from voxatlas.features.phonology.formant.midpoint import FormantMidpointExtractor
>>> from voxatlas.pipeline.feature_store import FeatureStore
>>> tracks = pd.DataFrame(
...     [
...         {"frame_id": 1, "start": 0.0, "end": 0.01, "time": 0.005, "phoneme_id": 1, "label": "i", "ipa": "i", "is_vowel": 1.0, "F1": 300.0, "F2": 2200.0, "F3": 3000.0},
...         {"frame_id": 2, "start": 0.01, "end": 0.02, "time": 0.015, "phoneme_id": 1, "label": "i", "ipa": "i", "is_vowel": 1.0, "F1": 320.0, "F2": 2180.0, "F3": 2980.0},
...     ]
... )
>>> store = FeatureStore()
>>> store.add("phonology.formant.tracks", TableFeatureOutput(feature="phonology.formant.tracks", unit="frame", values=tracks))
>>> out = FormantMidpointExtractor().compute(FeatureInput(audio=None, units=None, context={"feature_store": store}), {})
>>> out.values.shape[0]
1
- name: str = 'phonology.formant.midpoint'#
- input_units: str | None = 'phoneme'#
- output_units: str | None = 'phoneme'#
- dependencies: list[str] = ['phonology.formant.tracks']#
- default_config: dict = {}#
- compute(feature_input, params)[source]#
Compute the extractor output for a single pipeline invocation.
This method is the reusable execution entry point for the extractor. It receives the standard FeatureInput bundle, applies the configured algorithm, and returns feature values aligned to the extractor output units for storage in the pipeline feature store.
- Parameters:
feature_input (object) – Structured extractor input bundling audio, hierarchical units, and execution context for this feature computation.
params (object) – Resolved feature configuration for this invocation. Keys are feature-specific and merged from defaults and pipeline settings.
- Returns:
Structured output aligned to the phoneme unit level when applicable.
- Return type:
FeatureOutput
Examples
>>> import pandas as pd
>>> from voxatlas.features.feature_input import FeatureInput
>>> from voxatlas.features.feature_output import TableFeatureOutput
>>> from voxatlas.features.phonology.formant.midpoint import FormantMidpointExtractor
>>> from voxatlas.pipeline.feature_store import FeatureStore
>>> tracks = pd.DataFrame(
...     [{"frame_id": 1, "start": 0.0, "end": 0.01, "time": 0.005, "phoneme_id": 1, "label": "i", "ipa": "i", "is_vowel": 1.0, "F1": 300.0, "F2": 2200.0, "F3": 3000.0}]
... )
>>> store = FeatureStore()
>>> store.add("phonology.formant.tracks", TableFeatureOutput(feature="phonology.formant.tracks", unit="frame", values=tracks))
>>> result = FormantMidpointExtractor().compute(FeatureInput(audio=None, units=None, context={"feature_store": store}), {})
>>> result.unit
'phoneme'