Analyze Task

Usage

The Analyze task is used to collect information about a media file in a Hybrik job. Most often it follows the initial source element in a job, providing information about the source that later tasks in the workflow can use for decisions. An analyze task can also be placed immediately after a transcode task as a way to check that task's results.

When an analyze task is set with only general_properties enabled, basic file information is collected: essentially the container metadata that is available without actually decoding the file (bitrate, resolution, etc.). Adding the deep_properties object allows deeper visual and sound analyses to be performed. For more details on the types of analyses that can be run, as well as the results they provide, see the tutorials for general_properties and deep_properties.

Analyze results can also be referenced in a QC task condition to verify certain properties of your source file, such as the file bitrate, number of audio tracks, video quality as compared to the source, or whether there are periods of silence or black in your content.
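As a rough illustration only - the payload structure and condition expression below are hypothetical, see the QC Task Tutorial for the actual syntax - a qc_task verifying that the analyzed source contains audio might be sketched as:

{
  "uid": "qc_task",
  "kind": "qc",
  "payload": {
    "conditions": [
      {
        "condition": "general_properties.audio.count > 0"
      }
    ]
  }
}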

The following are typical workflows using an analyze_task:

Analyze Source

This simple example job collects basic information about the source file.

[Diagram: analyze_01]

Analyze Source, QC, Transcode

This example job has an analyze_task followed by a qc_task, which uses the result of the analysis to verify that the source contains audio. If the qc_task passes, the transcode_task is run:

[Diagram: analyze_02]

Analyze Source, Transcode with Crop Filter

This example job has an analyze_task that includes a deep_properties element, which decodes the source to detect black borders (aka letterboxing). If borders are present, their extents in pixels are later applied in the video crop filter of the transcode task:

[Diagram: analyze_03]

Transcode, Analyze (compare to source), Produce Quality Report

This diagram shows a job which transcodes the source, and then runs an analyze task to collect and report quality results of the output compared to the original source:

[Diagram: analyze_04]

Source Pipeline

The source_pipeline object provides a way to control some aspects of the file to be analyzed, manipulating its components into a format suited to the analysis being performed. Possible operations include re-mapping audio or trimming the source so that only a portion of the content is analyzed.

Audio Mapping

For example, if your source file contains 4 mono audio tracks, you could use audio mapping in the source_pipeline to configure those tracks as two 2-channel stereo tracks for separate EBU R128 analysis via Dolby Professional Loudness Correction (dplc):

{
  "uid": "analyze_task",
  "kind": "analyze",
  "payload": {
    "source_pipeline": {
      "contents": [
        {
          "kind": "audio",
          "map": [
            {
              "input": { "track": 0, "channel": 0 },
              "output": { "track": 0, "channel": 0 }
            },
            {
              "input": { "track": 1, "channel": 0 },
              "output": { "track": 0, "channel": 1 }
            },
            {
              "input": { "track": 2, "channel": 0 },
              "output": { "track": 1, "channel": 0 }
            },
            {
              "input": { "track": 3, "channel": 0 },
              "output": { "track": 1, "channel": 1 }
            }
          ]
        }
      ]
    },
    "general_properties": {
      "enabled": true
    },
    "deep_properties": {
      "audio": [
        {
          "dplc": {
            "enabled": true,
            "regulation_type": "ebu_r128"
          },
          "track_selector": {
            "index": 0
          }
        },
        {
          "dplc": {
            "enabled": true,
            "regulation_type": "ebu_r128"
          },
          "track_selector": {
            "index": 1
          }
        }
      ]
    }
  }
}

The same approach could be used to map a set of discrete 5.1 mono channels into a single 6-channel track for analysis, as sketched below.
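A minimal sketch following the same map structure as the example above, assuming six mono source tracks ordered L, R, C, LFE, Ls, Rs:

{
  "source_pipeline": {
    "contents": [
      {
        "kind": "audio",
        "map": [
          { "input": { "track": 0, "channel": 0 }, "output": { "track": 0, "channel": 0 } },
          { "input": { "track": 1, "channel": 0 }, "output": { "track": 0, "channel": 1 } },
          { "input": { "track": 2, "channel": 0 }, "output": { "track": 0, "channel": 2 } },
          { "input": { "track": 3, "channel": 0 }, "output": { "track": 0, "channel": 3 } },
          { "input": { "track": 4, "channel": 0 }, "output": { "track": 0, "channel": 4 } },
          { "input": { "track": 5, "channel": 0 }, "output": { "track": 0, "channel": 5 } }
        ]
      }
    ]
  }
}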

Trim

You may also want to adjust the inpoint and/or outpoint of the file being analyzed. For example, if your content includes bars & tone for the first 30 seconds, you may not want the analysis to include that portion. You can use the trim object in the source_pipeline, as shown below, to omit that portion of the source from analysis:

{
  "uid": "analyze_task",
  "kind": "analyze",
  "payload": {
    "source_pipeline": {
      "trim": {
        "inpoint_sec": 30
      }
    },
    "general_properties": {
      "enabled": true
    },
    "deep_properties": {
      "video": {
        "levels": {
          "enabled": true
        }
      }
    }
  }
}

General Properties

General properties are analyzed if the enabled parameter is set to true:

{
  "uid": "analyze_task",
  "kind": "analyze",
  "payload": {
    "general_properties": {
      "enabled": true
    }
  }
}

The results from an analyze task with only general_properties enabled will include basic information and metadata about your media file, similar to what MediaInfo provides.

These results are available in the job summary JSON and can be used to collect and store information about the source file. To locate this section in a job summary JSON, search for "analyzer"; the information will be directly below.

The following is an excerpt from the job summary JSON of a job that included an analyze task with general_properties enabled:

  "analyzer": {
"general_properties": {
"container": {
"kind": "mov",
"duration_sec": 300.3,
"bitrate_kb": 8000,
"size_kb": 326952
},
"audio": [
{
"pid": 2,
"sample_format": "pcm_s16le",
"codec": "pcm",
"sample_rate": 48000,
"channels": 6,
"sample_size": 16,
"language": "en",
"duration_sec": 300.3,
"bitrate_mode": "cbr",
"bitrate_kb": 4608,
"channel_order": "L R C LFE Ls Rs"
}
],
"video": {
"pid": 1,
"codec": "h264",
"profile": "high",
"level": 4,
"bitrate_kb": 2768.533,
"frame_rate": 23.976023976023978,
"height": 1080,
"width": 1920,
"interlace_mode": "progressive",
"dar": 1.778,
"par": 1,
"chroma_format": "yuv420p",
"duration_sec": 300.3,
"frame_rate_mode": "constant",
"clean_aperture_height": 1080,
"clean_aperture_width": 1920,
"bit_resolution": 8,
"color_space": "YUV"
}
}
}

To control how mov atoms are reported in your analysis results, include the mov_atom_descriptor_style parameter in the general_properties object:

{
  "general_properties": {
    "enabled": true,
    "mov_atom_descriptor_style": "by_track"
  }
}

The options for mov_atom_descriptor_style are:

  • none
    • Does not list atoms.
  • condensed
    • Lists the most important atoms linearly, along with the tracks they belong to.
  • by_track
    • Shows the full hierarchy, listed per track.
  • full
    • Shows the full file hierarchy in the asset element.

Deep Properties

Deep properties in an analyze task decode the media and measure specific technical properties. Some of these measurements can be used in downstream transcode tasks, such as audio normalization or cropping video to remove letterboxing. Others return quality results that may be used for validation in quality control tasks. You can learn more in the QC Task Tutorial.

Compare Asset

Certain deep_properties analysis types perform a comparison between two files - usually the output from a transcode task and the source from which it was derived. Available types are psnr (audio & video), ssim (video), ms_ssim (video), and vmaf (video).

These analysis types require the inclusion of a compare_asset object in the analyze_task payload, as in the example below:

{
  "uid": "analyze_task",
  "kind": "analyze",
  "payload": {
    "compare_asset": {
      "kind": "asset_url",
      "payload": {
        "storage_provider": "s3",
        "url": "s3://my_bucket/my_reference_file.mov"
      }
    }
  }
}
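To run the comparison itself, you also enable the corresponding analysis type in deep_properties. A minimal sketch for vmaf, following the same enabled pattern as the other analysis types (see the VMAF Analysis Tutorial for complete options):

{
  "deep_properties": {
    "video": {
      "vmaf": {
        "enabled": true
      }
    }
  }
}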

Comparative Analysis Filters

If filters were applied during the transcode, then in order to run a valid comparative analysis of the output against the source, you'll need to apply the same filter(s) to your source in the analyze task. To do this, include a settings object in the deep_properties video or audio section, as shown below:

{
  "deep_properties": {
    "video": {
      "ssim": {
        "enabled": true
      },
      "settings": {
        "comparative": {
          "compare_filters": [
            {
              "kind": "fade",
              "payload": {
                "mode": "in",
                "start_sec": 0,
                "duration_sec": 3
              }
            }
          ]
        }
      }
    }
  }
}

The available compare_filters are listed in the API docs.

There are also settings to control certain aspects of the comparison operation, such as which of the two files (reference or product) is scaled to match the other prior to analysis, or which file's chroma format is used as the reference.

Full details are available here: size_selector & chroma_format_selector
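As a rough sketch (the values shown are illustrative assumptions; consult the API docs above for the accepted values), these selectors would sit in the same comparative settings object used for compare_filters:

{
  "deep_properties": {
    "video": {
      "settings": {
        "comparative": {
          "size_selector": "reference",
          "chroma_format_selector": "reference"
        }
      }
    }
  }
}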

Deep Properties Analysis Types

Each analysis type can include an "is_optional" parameter. If set to true, the analyze_task will not fail if the corresponding media type (audio/video) does not exist in the source.

{
  "deep_properties": {
    "audio": [
      {
        "silence": {
          "enabled": true,
          "is_optional": true,
          "noise_db": -70,
          "duration_sec": 4
        }
      }
    ]
  }
}

The following are some of the available deep properties for audio & video. Details about their functions can be found in the API Docs: deep_properties

deep_properties, audio:

  • track_selector
    • Mechanism to select a specific audio track.
  • levels
  • ebur128
    • Performs an EBU R.128 loudness determination on the audio track(s).
  • dolby_professional_loudness
  • volume
    • Performs a simple volume measurement; less precise than deep_stats, but faster.
  • silence
  • psnr
  • emergency_alert
    • Detect emergency alert signals in the audio track(s).

deep_properties, video:

  • track_selector
    • Mechanism to select a specific video track for analysis.
  • settings
    • Settings for the comparison file, such as filters to be applied prior to comparison.
  • black
  • black_borders
    • Detects black borders, such as letter- or pillarboxing.
  • interlacing
  • levels
    • Analyzes the video and detects min/max Y, Cb, Cr, etc.
  • blockiness
  • hdr_stats
    • Detects HDR10 signal levels.
  • complexity
    • Produces a measurement for how complex the content is over time.
  • content_variance
    • Produces a measurement for how much the content is changing over time.
  • scene_change_score
    • Estimates scene-change probabilities.
  • pse
  • compressed_stats
  • compressed_quality
    • Determines, for example, PQ values of the underlying bitstream.
  • ssim (comparative)
  • ms_ssim (comparative)
    • Determines the MS-SSIM value between an asset and a reference file.
  • psnr (comparative)
  • vmaf (comparative)
    • Uses the Netflix Video Multi-Method Assessment Fusion (VMAF) methods to assess the quality of an asset compared with a reference file.
    • Tutorial: VMAF Analysis Tutorial

Reports

Base report

NOTE: In Hybrik version 1.217, we introduced a change to the structure of the analyzer results that have timed events. The new result version can be activated by setting "response_version": 2 in your analyzer's options. The default version will become version 2 in a future release.
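For example, a minimal sketch of activating the new result version, assuming the options object sits at the top level of the analyze payload:

{
  "uid": "analyze_task",
  "kind": "analyze",
  "payload": {
    "options": {
      "response_version": 2
    },
    "general_properties": {
      "enabled": true
    }
  }
}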

An analyze_task can be set to generate a PDF report of the base analysis results by adding a report object to the payload, as shown below. You can decide whether to skip report generation based on task success/failure by setting create_condition to one of:

  • always
  • on_failure
  • on_success
{
  "uid": "analyze_task",
  "kind": "analyze",
  "payload": {
    "report": {
      "create_condition": "always",
      "file_pattern": "{source_basename}_analyze_report.pdf",
      "location": {
        "storage_provider": "s3",
        "path": "{{destination_path}}"
      },
      "options": {
        "report_version": "v3.0"
      }
    },
    ...

File Property Specific Report

Certain analysis types generate additional results that can be written to a file. These reports can be formatted as PDF, CSV, or JSON; the format is determined by the file extension defined in the analysis object, as follows:

{
  "deep_properties": {
    "video": {
      "ssim": {
        "enabled": true,
        "results_file": {
          "location": {
            "storage_provider": "s3",
            "path": "s3://my_bucket/reports"
          },
          "file_pattern": "{source_basename}.csv"
        }
      }
    }
  }
}

Note about Collected Data

When an analyze task is performed in a job, the collected information is stored temporarily for use by downstream tasks in that job. If any subsequent task results in an error (reported as "failed" in the UI), the collected results will no longer be available.

Examples