Getting Started
Introduction
Sample code is shown in the panel on the right.
Welcome to Hybrik, Dolby's cloud media processing service. Every element of the Hybrik workflow can be managed through Hybrik's RESTful API, including transcoding, quality control, and data transfers. Even complex workflows with conditional execution and scripting can be designed and implemented using the Hybrik API. When defining transcodes through the API, you can reference pre-configured presets or explicitly define the complete encode pipeline. The API uses JSON structures to define each element.
All Hybrik API requests use the HTTP methods POST, PUT, GET, and DELETE. The API uses HTTP Basic authentication to permit API access, plus a user-specific login call that returns an expiring token. This token must be passed in the header of all subsequent API calls.
A typical API session to submit and track a transcoding job would look like this:
- Step 1 - Authenticate User (returns security token used in following calls)
- Step 2 - Create Job (submits your job in JSON format)
- Step 3 - Get Job Info (tracks status of your job)
- Step 4 - Get Job Result (complete details of your job after completion or failure)
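The four steps above can be sketched as plain HTTP calls. This is a minimal illustration, assuming Node 18+ (global `fetch`) and the demo entry point documented below; the helper name `sessionStep` is ours, not part of any Hybrik SDK.

```javascript
// Sketch of a typical API session. Names here are illustrative, not SDK calls.
const BASE = 'https://api-demo.hybrik.com/v1';
const COMPLIANCE = '20190406'; // your compliance date, YYYYMMDD

// Build the request descriptor for each step of the session.
function sessionStep(step, token, jobId) {
  const common = { 'Content-Type': 'application/json', 'X-Hybrik-Compliance': COMPLIANCE };
  const auth = token ? { ...common, 'X-Hybrik-Sapiauth': token } : common;
  switch (step) {
    case 'login':      return { method: 'POST', url: `${BASE}/login`, headers: common };
    case 'create_job': return { method: 'POST', url: `${BASE}/jobs`, headers: auth };
    case 'job_info':   return { method: 'GET',  url: `${BASE}/jobs/${jobId}/info`, headers: auth };
    case 'job_result': return { method: 'GET',  url: `${BASE}/jobs/${jobId}/result`, headers: auth };
  }
}

// Executing a step is then a single fetch, e.g.:
// const r = sessionStep('job_info', token, jobId);
// const res = await fetch(r.url, { method: r.method, headers: r.headers });
```

Note that only the login call omits the `X-Hybrik-Sapiauth` token header; every call carries the compliance date.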
At the bottom of this document are downloadable samples for both JSON jobs as well as JavaScript libraries for easily incorporating the Hybrik API into your projects.
REST Arguments
The Hybrik REST API follows REST conventions regarding the placement of arguments in a query string or in a request body. One limitation applies: a query string may carry at most 250 array elements. If any array in your request exceeds this length, the argument must be passed in the request body.
Generally, Hybrik will attempt to parse arguments from both locations in order to work around limits such as maximum URL length.
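A client can apply this rule mechanically. The sketch below (the helper name and the split strategy are ours, hedged on the documented 250-element limit) routes oversized arrays into the request body and leaves everything else in the query string:

```javascript
// Decide whether each argument can travel in the query string or must move to
// the request body (arrays longer than 250 elements). The 250-element limit
// comes from the Hybrik docs; the helper itself is illustrative.
const MAX_QUERY_ARRAY = 250;

function splitArguments(params) {
  const query = {};
  const body = {};
  for (const [key, value] of Object.entries(params)) {
    if (Array.isArray(value) && value.length > MAX_QUERY_ARRAY) {
      body[key] = value;   // too long for a query string
    } else {
      query[key] = value;  // safe to send as a query parameter
    }
  }
  return { query, body };
}
```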
Jobs and Tasks
Example Job JSON
{
"name": "Hybrik API Example#1 - simple transcode",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"task": {
"retry_method": "fail"
},
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/example1"
},
"targets": [
{
"file_pattern": "%s.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 640,
"height": 360,
"frame_rate": 23.976,
"bitrate_kb": 600
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
A Job is defined in JSON notation, and tells the Hybrik service precisely how to execute that Job. Note that a Job is made up of one or more Tasks, and Tasks are then distributed across the available machines. Therefore a Job may end up executing on many different machines. Both Jobs and Tasks can have varying priorities within the system to allow you to finely control performance and latency. Tasks may even be set to retry if they fail, which allows you to manage intermittent error conditions like network accessibility. Depending on your workflow, some Tasks may not get executed. For example, a Job execution might look like this:
- Task 1 - Analyze a source file
- Task 2 - Analyze the source file
- Task 3 - Verify that the audio levels match the input requirements
- if pass, go to Task 4
- if fail, go to Task 7
- Task 4 - Transcode source file to MP4
- Task 5 - Copy result to CDN
- Task 6 - Notify user that result is live and stop here
- Task 7 - Copy failed source file from Task 2 to new location
- Task 8 - Notify QC department of failure
If the file passes the QC test, Tasks 1, 2, 3, 4, 5, and 6 execute. If the file fails the QC test, Tasks 1, 2, 3, 7, and 8 execute. A Job specifies not only which Tasks will be executed, but also how those Tasks are connected to each other.
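The branch at Task 3 maps directly onto an entry in the Job's connections array. The element uids below ("qc_levels", "transcode_mp4", "copy_failed") are hypothetical; use the uids of your own elements:

```javascript
// One connection entry for the QC branch: on success the job proceeds to the
// transcode element (Task 4), on error to the copy-failed-source element
// (Task 7). Uids are illustrative.
const qcBranch = {
  from: [{ element: 'qc_levels' }],           // Task 3
  to: {
    success: [{ element: 'transcode_mp4' }],  // Task 4
    error:   [{ element: 'copy_failed' }]     // Task 7
  }
};
```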
The basic structure of a Job JSON looks like this:
- Job Info
- Name
- Priority
- Etc.
- Task Element Info
- Task 1
- Task 2
- Etc.
- Connection Info
- Task 1 is connected to Task 2
- Task 2 is connected to Task 3 on success, but Task 4 on failure
- Etc.
A sample JSON file describing a Job is shown on the right. This Job takes a source file from an Amazon S3 location (s3://hybrik-examples/public/sources/sample1.mp4) and puts the transcode result into a different S3 location (s3://hybrik-examples/public/output/example1). The transcode video parameters are set to use the h.264 codec, with a width of 640 pixels, a height of 360 pixels, a frame rate of 23.976 frames per second, and a bitrate of 600kb/sec. The audio format is set to use the HE-AAC V2 codec, with 2 channels of audio, at a sample rate of 44.1kHz and a bitrate of 128kb/sec.
Jobs may be simple and define only a Source and Transcode element, or they may be very complex with dozens of different tasks and conditional branching.
User Authentication
Version and Compliance
The Hybrik API version is specified in the request URI (e.g. "v1" in "https://api-demo.hybrik.com/v1/jobs"). All calls must include this version number.
In addition, an API compliance date must be passed with each call via the 'X-Hybrik-Compliance' header in the format 'YYYYMMDD' (e.g. 20190406). The compliance date essentially states: "I, the client application, comply with the API up to this date." If we introduce changes to the API, we will ensure that the response remains valid for the client based on this compliance date. That way you do not need to worry about changes to our API; just tell us when you started using it and we will conform to your usage.
The entry point for testing the API will be https://api-demo.hybrik.com/v1. Note that once you are a customer of Hybrik, you will be given a different API entry point to operate against.
Authentication Login
Logging in to the Hybrik API
Hybrik uses two authentication mechanisms: HTTP Basic authentication, which permits access to the API itself, and a user-specific login call, which returns an expiring token. This token must be passed with all API calls in the 'X-Hybrik-Sapiauth' header.
The token expires after 30 minutes; however, each successful API call auto-extends it by another 30 minutes. After expiration, a new login call, returning a new token, is required.
The credentials for HTTP Basic authentication are unique per customer account. That authentication does not grant access to any customer resources; it only safeguards access to the API itself. The account owner can obtain the Basic HTTP Auth credentials via the Hybrik web interface on the Account > Info page.
IMPORTANT: For security reasons, the credentials to be used for API access cannot be the same as those of a web interface user account. You must instead use an API User, which can be created in the Hybrik web interface in the Account > API Users page. The Account > Info page also has more information about this, in the bottom right of the page.
Do not share your API credentials or Basic HTTP Authentication credentials with anyone.
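The 30-minute expiry and auto-extension described above can be tracked client-side. This is a sketch only (the real connector module handles this for you); the factory name and API are ours:

```javascript
// Client-side token bookkeeping: the token expires 30 minutes after the last
// successful call, so re-login only when that window has lapsed.
const TOKEN_TTL_MS = 30 * 60 * 1000;

function makeTokenTracker(now = Date.now) {
  let token = null;
  let lastOkCall = 0;
  return {
    set(t) { token = t; lastOkCall = now(); },  // after a successful login
    touch() { lastOkCall = now(); },            // after each successful API call
    needsLogin() { return !token || now() - lastOkCall >= TOKEN_TTL_MS; },
    get() { return token; }
  };
}
```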
The example JavaScript code on the right is based on Node.js and uses the "hybrik_connector.js" module included in the sample code. The hybrik_connector module can be used to create an API access object and submit http-based calls to the Hybrik API. The basic functions are:
Generate an API access object
var hybrik_api = new HybrikAPI(api_url, compliance_date, oapi_key, oapi_secret, user_key, user_secret)
Connect to the API
hybrik_api.connect()
Submit an API call
hybrik_api.call_api(http_method, api_method, url_params, body_params)
The sample code can be reviewed in the section titled: API Sample JavaScript
HTTP Request
// Construct an API access object, connect to the API, and then submit a job
var hybrik_api = new HybrikAPI(
api_config.HYBRIK_URL,
api_config.HYBRIK_COMPLIANCE_DATE,
api_config.HYBRIK_OAPI_KEY,
api_config.HYBRIK_OAPI_SECRET,
api_config.HYBRIK_AUTH_KEY,
api_config.HYBRIK_AUTH_SECRET
);
// connect to the API
hybrik_api.connect()
.then(function () {
// submit the job by POSTing the '/jobs' command
return hybrik_api.call_api('POST', '/jobs', null, theJob)
.then(function (response) {
console.log('Job ID: ' + response.id);
return response.id;
});
})
.catch(function (error) {
// error handling and such here
console.error(error);
});
$ curl -u OAPI_KEY:OAPI_SECRET -X POST https://api-demo.hybrik.com/v1/login \
-d '{
"auth_key": "john.doe@customer.hybrik.com",
"auth_secret": "very secret password"
}' \
-H "Content-Type: application/json" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Log in to the Hybrik API. Requires HTTP Basic authentication with the API credentials provided by Hybrik.
POST /login
Required Parameters
Name | Type | Description |
---|---|---|
auth_key | string | API User name or email address. Length: 0..512 |
auth_secret | string | API User password. Length: 0..512 |
Job Management
These API calls allow for the creation and management of Hybrik jobs. This includes getting information about a specific job or getting a list of jobs.
Create Job
//Submit a job
hybrik_api.call_api('POST', '/jobs', null, {
name: "My job",
priority: 100,
tags: ["my_tag", "my_other_tag"],
payload: {
// this is the job JSON
}
})
$ curl -u OAPI_KEY:OAPI_SECRET -X POST https://api-demo.hybrik.com/v1/jobs \
-d '{
"name": "Fifth Element, Web Streaming",
"user_tag": "myjob_012345",
"schema": "hybrik",
"payload": "<job payload>",
"priority": 100,
"expiration": 1440,
"task_tags": [
"high_performance",
"us-west-1"
],
"task_retry": {
"count": 3,
"delay_sec": 180
}
}' \
-H "Content-Type: application/json" \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"id": "123456789"
}
Create a new job.
HTTP Request
POST /jobs
Required Parameters
Name | Type | Description |
---|---|---|
name | string | The visible name of a job |
payload | string | Depending on the schema, this must be a JSON object (schema = 'hybrik') or a serialized XML document (schema = 'rhozet') Length: 0..128000 |
schema | string | Selects the API schema compatibility of the job payload. Defaults to "hybrik" if not specified. one of: "hybrik" or "rhozet" or "api_test" |
Optional Parameters
Name | Type | Description |
---|---|---|
expiration | integer | Expiration (in minutes) of the job. A completed job will expire and be deleted after [expiration] minutes. Default is 30 days. default: 43200 Range: value <= 259200 |
priority | integer | priority (1: lowest, 254: highest) of a job default: 100 Range: 1 <= value <= 254 |
task_retry:count | integer | The number of times to attempt to retry the task if there is a failure. default: 0 |
task_retry:delay_sec | integer | The number of seconds to wait before a retry attempt. default: 45 |
task_tags | array | The tags all the tasks of this job will have. Note that a render node needs to provide these tags for a task to be executed on that node. |
user_tag | nullable string | An optional, free-form, persistent identifier for this job. Intent is to provide a container for a machine trackable, user specified, identifier. For human readable identifiers, please use the name field of a job. Hybrik will not verify this identifier for uniqueness. Length: 0..192 |
Update Job
//Modify Job #12345
hybrik_api.call_api('PUT', '/jobs/12345', null, {
name: "My new name",
priority: 250,
expiration: 1440
})
$ curl -u OAPI_KEY:OAPI_SECRET -X PUT https://api-demo.hybrik.com/v1/jobs/$JOB_ID \
-d '{
"name": "Fifth Element, Web Streaming",
"user_tag": "myjob_012345",
"priority": 100,
"expiration": 1440
}' \
-H "Content-Type: application/json" \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"id": "12345"
}
Modifies an existing job.
HTTP Request
PUT /jobs/{job_id}
Optional Parameters
Name | Type | Description |
---|---|---|
expiration | integer | The expiration (in minutes) of the job. A completed job will expire and be deleted after [expiration] minutes. Default is 30 days. default: 43200 Range: value <= 259200 |
name | string | The visible name of a job. |
priority | integer | The job priority (1: lowest, 254: highest). default: 100 Range: 1 <= value <= 254 |
user_tag | nullable string | An optional, free-form, persistent identifier for this job. Intent is to provide a container for a machine trackable, user specified, identifier. For human readable identifiers, please use the name field of a job. Hybrik will not verify this identifier for uniqueness. Length: 0..192 |
Stop Job
//Stop Job #12345
hybrik_api.call_api('PUT', '/jobs/12345/stop')
$ curl -u OAPI_KEY:OAPI_SECRET -X PUT https://api-demo.hybrik.com/v1/jobs/$JOB_ID/stop \
-H "Content-Type: application/json" \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"id": "12345"
}
Stops a job.
HTTP Request
PUT /jobs/{job_id}/stop
Delete Job
//Delete Job #12345
hybrik_api.call_api('DELETE', '/jobs/12345')
$ curl -u OAPI_KEY:OAPI_SECRET -X DELETE https://api-demo.hybrik.com/v1/jobs/$JOB_ID \
-H "Content-Type: application/json" \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"id": "12345"
}
Delete an existing job.
HTTP Request
DELETE /jobs/{job_id}
Delete Jobs
//Delete Jobs 12345, 56789, and 34567
hybrik_api.call_api('DELETE', '/jobs', null, {
ids: [12345, 56789, 34567]
})
$ curl -u OAPI_KEY:OAPI_SECRET -X DELETE https://api-demo.hybrik.com/v1/jobs \
-d '{
"ids": [
"12344346",
"12344347"
]
}' \
-H "Content-Type: application/json" \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"items": [
"12345",
"56789",
"34567"
]
}
Delete a set of existing jobs.
HTTP Request
DELETE /jobs
Required Parameters
Name | Type | Description |
---|---|---|
ids | array | An array of the job numbers to be deleted. Example: ["12345", "12346"] |
Get Job Definition
//Get job definition for Job #12345
hybrik_api.call_api('GET', '/jobs/12345/definition')
$ curl -u OAPI_KEY:OAPI_SECRET https://api-demo.hybrik.com/v1/jobs/$JOB_ID/definition \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"name": "My Job",
"priority": 100,
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_folder/my_file.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"task": {
"retry_method": "fail",
"tags": [
]
},
"payload": {
// transcode payload
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
Get the job structure in the Hybrik schema notation.
HTTP Request
GET /jobs/{job_id}/definition
Get Job Result
//Get job result for Job #12345
hybrik_api.call_api('GET', '/jobs/12345/result')
$ curl -u OAPI_KEY:OAPI_SECRET https://api-demo.hybrik.com/v1/jobs/$JOB_ID/result \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"errors": [],
"job": {
"id": 476993,
"is_api_job": 1,
"priority": 100,
"creation_time": "2016-11-11T23:30:51.000Z",
"expiration_time": "2016-12-11T23:30:52.000Z",
"user_tag": null,
"status": "completed",
"render_status": "completed",
"task_count": 1,
"progress": 100,
"name": "Transcode Job: s3:/my_bucket/my_folder/my_file.mp4",
"first_started": "2016-11-11T23:31:21.000Z",
"last_completed": "2016-11-11T23:31:29.000Z"
},
"tasks": [
{
"id": 2196469,
"priority": 100,
"name": "Transcode Job: s3:/my_bucket/my_folder/my_file.mp4",
"retry_count": -1,
"status": "completed",
"assigned": "2016-11-11T23:30:51.000Z",
"completed": "2016-11-11T23:30:51.000Z",
/* information about each task here */
}
]
}
Get the job result after completion or failure.
HTTP Request
GET /jobs/{job_id}/result
Get Job Info
//Get job info for Job #12345
hybrik_api.call_api('GET', '/jobs/12345/info', {
fields: [
"id",
"name",
"progress",
"status",
"start_time",
"end_time"
]
})
$ curl -u OAPI_KEY:OAPI_SECRET https://api-demo.hybrik.com/v1/jobs/$JOB_ID/info \
-G \
-d fields[]=id \
-d fields[]=progress \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"id": "12345",
"name": "Fifth Element, Web Streaming",
"progress": 42,
"status": "running",
"start_time": "2016-01-01T12:00:00Z",
"end_time": "2016-01-01T12:10:30Z"
}
Get information about a specific job.
HTTP Request
GET /jobs/{job_id}/info
Optional Parameters
Name | Type | Description |
---|---|---|
fields | array | Specify which fields to include in the response. Possible values are: id, name, user_tag, priority, status, substatus, progress, creation_time, start_time, end_time, expiration_time, and error. Example: ["id", "progress"] |
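A common pattern is to poll this endpoint until the job finishes. The sketch below is illustrative: `callApi` is injected so any HTTP client works (the connector's `hybrik_api.call_api` has the same shape), and the set of terminal states shown is an assumption based on the status values documented here.

```javascript
// Poll GET /jobs/{id}/info until the job leaves the active states.
const DONE_STATES = new Set(['completed', 'failed']);

async function waitForJob(callApi, jobId, { intervalMs = 5000, maxPolls = 100 } = {}) {
  for (let i = 0; i < maxPolls; i++) {
    const info = await callApi('GET', `/jobs/${jobId}/info`,
      { fields: ['id', 'status', 'progress'] });
    if (DONE_STATES.has(info.status)) return info;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`job ${jobId} still running after ${maxPolls} polls`);
}
```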
List Jobs
//Get a list of jobs from #12345 to #12347
hybrik_api.call_api('GET', '/jobs/info', {
ids: [
12345,
12346,
12347
],
fields: [
"id",
"name",
"progress",
"status",
"start_time",
"end_time"
]
})
$ curl -u OAPI_KEY:OAPI_SECRET https://api-demo.hybrik.com/v1/jobs/info \
-G \
-d ids[]=12345 \
-d ids[]=12346 \
-d ids[]=12347 \
-d "filters[0][field]=status&filters[0][values][]=active&filters[1][field]=status&filters[1][values][]=completed" \
-d fields[]=id \
-d fields[]=name \
-d fields[]=status \
-d fields[]=start_time \
-d fields[]=end_time \
-d fields[]=progress \
-d sort_field=id \
-d order=asc \
-d skip=0 \
-d take=100 \
-H "X-Hybrik-Sapiauth: api_auth_token" \
-H "X-Hybrik-Compliance: YYYYMMDD"
Response Object
{
"items": [
{
"id": "12345",
"name": "My first job",
"status": "completed",
"start_time": "2020-06-11T23:30:37.000Z",
"end_time": "2020-06-11T23:30:45.000Z",
"progress": 100
},
{
"id": "12346",
"name": "My second job",
"status": "completed",
"start_time": "2020-06-11T23:30:37.000Z",
"end_time": "2020-06-11T23:30:40.000Z",
"progress": 100
},
{
"id": "12347",
"name": "My third job",
"status": "active",
"start_time": "2020-06-11T23:31:21.000Z",
"end_time": "2020-06-11T23:31:29.000Z",
"progress": 62
}
]
}
Provides a filtered, ordered list of job information records.
HTTP Request
GET /jobs/info
Optional Parameters
Name | Type | Description |
---|---|---|
ids | array | A list of job ids, filtering the records to be returned. default: [] |
fields | array | An array specifying which fields to include in the response. Possible values are: id, name, user_tag, priority, status, substatus, progress, start_time, creation_time, end_time, expiration_time, error default: ["id"] |
filters/field | string | one of: "name" or "user_tag" or "status" or "substatus" |
filters/values | array | The job filter match values. |
order | string | The sort order of returned jobs default: "asc" one of: "asc" or "desc" |
skip | integer | Specify number of records to omit in the result list default: 0 |
sort_field | string | The sort field of returned jobs. default: "id" one of: "id" or "name" or "user_tag" or "priority" or "status" or "substatus" or "progress" or "creation_time" or "end_time" or "start_time" or "expiration_time" |
take | integer | Specify number of records to retrieve. default: 100 Range: value <= 1000 |
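The skip/take parameters support paging through large job lists. A minimal sketch, with `callApi` injected (same shape as the connector's `call_api`) and the stop-on-short-page strategy as our assumption, not an official helper:

```javascript
// Page through GET /jobs/info with skip/take until a short page comes back.
async function listAllJobs(callApi, fields = ['id', 'status'], pageSize = 100) {
  const all = [];
  for (let skip = 0; ; skip += pageSize) {
    const { items } = await callApi('GET', '/jobs/info',
      { fields, sort_field: 'id', order: 'asc', skip, take: pageSize });
    all.push(...items);
    if (items.length < pageSize) return all;  // last (short or empty) page
  }
}
```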
Count Jobs
//Get a count of all the jobs ("queued", "active", "completed", "failed")
hybrik_api.call_api('GET', '/jobs/count')
$ curl -u OAPI_KEY:OAPI_SECRET https://api-demo.hybrik.com/v1/jobs/count \
-G \
-d filters[0][field]=status \
-d filters[0][values][]=active \
-H "X-Hybrik-Sapiauth: xxx" \
-H "X-Hybrik-Compliance: 20190101"
Response Object
{
"count": 107
}
Provides the count of jobs of the specified type(s).
HTTP Request
GET /jobs/count
Optional Parameters
Name | Type | Description |
---|---|---|
filters/field | string | The type of field to filter on. one of: "name" or "user_tag" or "status" or "substatus" |
filters/values | array | Filter match values. |
Job Task List
//Get a list of the tasks associated with job #12345
hybrik_api.call_api('GET', '/jobs/12345/tasks', {
fields: [
"id",
"name",
"status",
"progress",
"machine_uid"
]
})
$ curl -u OAPI_KEY:OAPI_SECRET https://api-demo.hybrik.com/v1/jobs/$JOB_ID/tasks \
-G \
-d fields[]=id \
-d fields[]=progress \
-H "X-Hybrik-Sapiauth: api_auth_token"
Response Object
{
"items": [
{
"id": "2196469",
"name": "API Job Trigger - My Job",
"status": "completed",
"progress": 100
},
{
"id": "2196470",
"name": "Transcode - My Job",
"status": "completed",
"progress": 100,
"machine_uid": "i-06a65147792f66a44"
},
{
"id": "2196471",
"name": "Anlayze - My Job",
"status": "running",
"progress": 67,
"machine_uid": "i-06a4a678234e46"
},
{
"id": "2196472",
"name": "QC - My Job",
"status": "queued",
"progress": 0,
"machine_uid": "i-34556a743357e53"
}
]
}
Provides a list of the tasks of a job.
HTTP Request
GET /jobs/{job_id}/tasks
Optional Parameters
Name | Type | Description |
---|---|---|
fields | array | Specify which fields to include in the response. Possible values are: id, name, kind, status, progress, creation_time, start_time, end_time, machine_uid, and error. default: ["id"] |
Job JSON
Overview
A Hybrik Job is composed of Elements and Connections. The relationship between these is as follows:
- A Job defines a complete media processing operation. It specifies the source file(s) and the various processing steps, including transcode, QC, notification, etc.
- Elements are the steps of a Job. An Element can be a processing task like transcoding or a logic step like an if/then choice.
- Connections describe how Elements are linked and under which conditions (success or error) each one runs.
When Hybrik is processing a Job, it breaks it down into Tasks. Generally, a Task has a one-to-one relationship with a Job Element. There are some Elements, however, that can be broken into smaller Tasks. For example, a transcode can be broken into many Tasks, where each Task is rendering a small section of the overall transcode.
This reference contains descriptions of all of the JSON objects that can be used in Hybrik. Whenever an object contains other objects, there is a hyperlink to the sub-object.
Jobs
Job JSON Example
{
"name": "Hybrik API Example#1",
"priority": 100,
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/transcode/example1"
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 640,
"height": 360,
"frame_rate": 23.976,
"bitrate_kb": 600
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
Name | Type | Description |
---|---|---|
name | string | A name for the job. This will be displayed in the Job window. It does not have to be unique, but it helps to find jobs when they are given searchable names. |
payload | object | The job payload contains all of the structural information about the job. The payload consists of an elements array and a connections array. The elements array defines the various job tasks and the connections array defines how these elements are connected. |
schema | string | Optional. Hybrik will be supporting some third-party job schemas, which can be specified in this string. The default is "hybrik". |
priority | integer | Optional. The priority of a job (1 = lowest, 254 = highest) default: 100 range: 1 <= value <= 254 |
user_tag | string | Optional. The purpose of the user_tag is to provide a machine-trackable, user-specified, identifier. For a human readable identifier, please use the name field of a job. Hybrik will not verify the uniqueness of this identifier. length: 0..192 |
expiration | integer | Optional. Expiration (in minutes) of the job. A completed job will expire and be deleted after [expiration] minutes. Default is 30 days. default: 43200 range: value <= 259200 |
task_retry | object | Optional. Object defining the default task retry behavior of all tasks in this job. Task retry determines the maximum number of times to retry a task as well as the delay between each attempt. |
task_tags | array | Optional. This array contains the task_tags that all the tasks of this job will have. When a task goes to be executed, it will only be executed on machine nodes that have a matching task_tag. For example, if a task is tagged with the tag "high_performance" then it will only run on machines that are also tagged "high_performance" |
definitions | object | Optional. Global string replacements can be defined in this section. Anything in the Job JSON that is enclosed in double curly braces, such as {{to_be_replaced}}, will be replaced. |
Job Payload
Example Job Payload Object
{
"name": "My Job Name",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload":{
/* source payload */
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"task": {
"retry_method": "fail"
},
"payload": {
/* transcode task payload */
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
Name | Type | Description |
---|---|---|
elements | array | An array defining the various job elements. Each element object has a uid that uniquely identifies it, a kind that specifies the type of object, a task object that defines the generic task behavior, and a payload. |
connections | array | An array defining how the various task elements are connected. |
Definitions
Example Job Definitions Object
{
"definitions": {
"source": "s3://my_bucket/my_folder/my_file.mp4",
"destination": "s3://my_bucket/output_folder",
"video_basics": {
"codec": "h264",
"profile": "high",
"level": "3.0",
"frame_rate": "24000/1001"
},
"audio_basics": {
"codec": "aac_lc",
"sample_rate": 48000,
"bitrate_kb": 128
}
},
"name": "My job: {{source}}",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "{{source}}"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination}}"
},
"targets": [
{
"file_pattern": "{source_basename}_1200kbps.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"$inherits": "video_basics",
"width": 1280,
"height": 720,
"bitrate_kb": 1200
},
"audio": [
{
"$inherits": "audio_basics",
"channels": 2
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
The Definitions object allows you to create global string replacements in the Job JSON. This can be useful for simplifying editing or parameter replacements in the JSON. If a string such as "my_string" is defined in the Definitions section, then Hybrik will replace every occurrence of {{my_string}} in the rest of the Job JSON with the value of "my_string" in the Definitions section. In the example on the right, every occurrence of {{destination}} in the Job JSON would be replaced with the path defined at the top.
If you need to insert the contents of an object defined in the Definitions section into another object, use the "$inherits" label. This can be particularly helpful when dealing with multi-layer outputs, since a setting can be changed in one location and affect all of the output layers.
When submitting jobs via the API, you should consider putting all of the parameters that will be replaced by the automation system into the definitions section. This makes it instantly visible which parameters need to be replaced and makes swapping out parameters simple. It is much easier to reference definitions.bitrate than it is to reference elements[1].transcode.payload.targets[0].video.bitrate_kb!
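To make the mechanism concrete, here is a local sketch of what {{name}} replacement and "$inherits" merging do to a job, based on the documented behavior. This is our illustration, not Hybrik's implementation, and it handles only the simple cases shown:

```javascript
// Expand a job: string-valued definitions replace {{name}} occurrences, and
// "$inherits" pulls an object definition's fields into the referencing object
// (explicit fields win over inherited ones).
function applyDefinitions(job) {
  const defs = job.definitions || {};
  const expand = (value) => {
    if (typeof value === 'string') {
      return value.replace(/\{\{(\w+)\}\}/g, (m, name) =>
        typeof defs[name] === 'string' ? defs[name] : m);
    }
    if (Array.isArray(value)) return value.map(expand);
    if (value && typeof value === 'object') {
      const base = typeof value.$inherits === 'string' ? defs[value.$inherits] || {} : {};
      const out = { ...base };
      for (const [k, v] of Object.entries(value)) {
        if (k !== '$inherits') out[k] = expand(v);
      }
      return out;
    }
    return value;
  };
  const { definitions, ...rest } = job;  // definitions are consumed, not emitted
  return expand(rest);
}
```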
Elements Array
The contents of the Elements Array are the components of your workflow. The available elements include: Source, Transcode, Analyze, QC, Notify, Copy, Package, Folder Enum, Watchfolder, DPP Package, BIF Creator, and Script. You can have multiple versions of the same type of element. For example, you could have 2 copies of the Analyze and QC tasks -- one for checking your incoming data and one for checking your outgoing results. These could be checking completely different parameters. Each Element has a UID that uniquely identifies it in the workflow.
Example Elements Array
{
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/transcode/example1"
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 640,
"height": 360,
"frame_rate": 23.976,
"bitrate_kb": 600
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
}
]
}
}
]
}
Name | Type | Description |
---|---|---|
uid | string | A unique ID for the task element. An example would be "source_file" or "transcode_task_1". This UID allows the task to be uniquely referenced by other parts of the job. |
kind | enum | The type of task element. One of: source, transcode, copy, analyze, qc, notify, package, folder_enum, watchfolder, dpp_packager, bif_creator, script |
task | object | An object describing the generic task behavior, such as priority and number of retries. |
payload | object | The payload describes the parameters of the specific element. Only one type of payload is allowed. The options are: source, transcode, copy, analyze, qc, notify, package, folder_enum, watchfolder, dpp_packager, bif_creator, script |
Connections Array
The Connections Array tells Hybrik how to "connect" the Elements in your workflow. Some items will execute in series, while others can execute in parallel. Some tasks will execute when the previous task completes successfully, and some you will only want to execute when the previous task fails. This sequencing and flow control are managed by the Connections Array. The Elements are referred to by their UID.
Example Connections Array
{
"from": [
{
"element": "transcode_task"
}
],
"to": {
"success": [
{
"element": "copy_task"
}
],
"error": [
{
"element": "error_notify"
}
]
}
}
Name | Type | Description |
---|---|---|
from | array | An array that lists each Element that is being connected to this item. |
to | object | An object that defines where this Element connects to. It contains "success" and "error" arrays. Only one of these sets of connections will be triggered upon completion. |
Task Object
Example Task Object
{
"uid": "transcode_task",
"kind": "transcode",
"task": {
"retry_method": "fail",
"name": "Test Transcode For Distribution",
"priority": 200
},
"payload": {
/* transcode payload goes here */
}
}
Name | Type | Description |
---|---|---|
name | string | Optional. A name for the task. This will be displayed in the Task window. It does not have to be unique, but it helps to search for specific tasks when they are given unique names. If left blank, Hybrik will automatically generate a task name based on the job name. |
tags | array | Optional. A list of job/task tags. Tags are custom strings that are used to match jobs and tasks to specific computing groups. |
retry_method | enum | Optional. A task can be retried automatically. One of: fail, retry. If this is set to "retry", then the retry object must be defined. default: fail |
retry | object | Defines how many times a retry should be attempted and how many seconds to wait between each attempt. |
flags | object | Optionally flags the job for special types of processing. |
priority | integer | Optional. If undefined, all tasks take on the priority of the parent job. The priority of a task (1 = lowest, 254 = highest) |
comment | string | Optional. The user-defined comment about a task. This is only accessible via the API. |
extended | object | Optional. The extended properties for a task. |
Retry Object
Example Retry Object
{
"task": {
"retry_method": "retry",
"retry": {
"count": 2,
"delay_sec": 30
}
}
}
Name | Type | Description |
---|---|---|
count | integer | Maximum number of retries. maximum: 5 |
delay_sec | integer | Optional. Number of seconds to wait after a failure until a retry is attempted. maximum: 3600 default: 45 |
Flags Object
Example Flags Object
{
"task": {
"flags": {
"split_task": "smart"
}
}
}
Name | Type | Description |
---|---|---|
split_task | enum disabled smart aggressive | Tasks with multiple output targets can be split across multiple machines. "Aggressive" will assign one target per machine. "Smart" will group outputs to achieve roughly equal processing time across targets. The default is "disabled", which will assign all targets to one machine. |
split_task_max_machines | integer | The maximum number of machines to split tasks across. |
skip_validation | boolean | This will disable job JSON validation at execution time. Used for very low latency operation. Default is "false". |
Source Element
Source
The Source object defines the source of your workflow. In its simplest form, the Source object points to a single file. But your Source can actually be much more complex. For example, your source may consist of multiple components -- video, audio tracks, and subtitle files. Or your source may actually be multiple sources all being stitched into a final output. Or you may have a source file that has 6 discrete mono audio tracks that you need to treat like 6 channels of a single surround track. The Source object lets you handle all of these scenarios. The Source object defines what files to use, where to find them, how to access them, and how to assemble them.
Example Source Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
}
Name | Type | Description |
---|---|---|
uid | string | A unique identifier for this element. |
kind | enum asset_url asset_complex | The type of source file. An "asset_url" is a single element, whereas an "asset_complex" is an asset made up of multiple elements. |
payload | object | The payload for the particular source type. The payload types are: asset_url asset_complex |
Asset URL
Example Asset URL Payload Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
}
Name | Type | Description |
---|---|---|
storage_provider | enum s3 gs ftp sftp http swift swiftstack akamains relative | The type of file access. |
url | string | The complete URL for the file location. |
access | anyOf s3 gs ftp sftp http swift swiftstack akamains | This contains credentials granting access to the location. |
Asset Complex
Example Asset_Complex Object
{
"uid": "my_complex_source",
"kind": "source",
"payload": {
"kind": "asset_complex",
"payload": {
"kind": "sequence",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"asset_versions": [
{
"version_uid": "intro",
"asset_components": [
{
"kind": "name",
"component_uid": "file1",
"name": "intro_video.mp4",
"trim": {
"inpoint_sec": 0,
"duration_sec": 10
}
}
]
},
{
"version_uid": "main",
"asset_components": [
{
"kind": "name",
"component_uid": "file2",
"name": "main_video.mov"
}
]
}
]
}
}
}
Asset_complex is used for defining multi-part sources. These can include multiple video, audio, and subtitle files. The JSON example on the right shows stitching 10 seconds of one video with another video. In the example, you will see two arrays, one called "asset_versions" and one called "asset_components". These exist so that you can create much more complex operations. Each element of the "asset_versions" array specifies one source asset that will be used. Because a source may not actually be a single file but rather a collection of video tracks, audio tracks, and subtitle tracks, each element of the "asset_versions" array contains an "asset_components" array. The "asset_components" array is where you would specify the various components.
You will also notice that each element in the "asset_components" array has a "kind" value. In our previous example, we used "name" as the value for "kind". Other values include "list", "template", "image_sequence", "binary_sequence", and "asset_sequence". These are used when you have sources of different types. For example, if you had an asset that consisted of thousands of .png files, called "animation0001.png", "animation0002.png", etc. that you wanted to transcode into a single output, you could specify it as shown on the right.
Example Still Image Source Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_complex",
"payload": {
"asset_versions": [
{
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"asset_components": [
{
"kind": "image_sequence",
"image_sequence": {
"base": "animation%04d.png"
}
}
]
}
]
}
}
}
For a more detailed overview of complex assets, please see: https://tutorials.hybrik.com/sources_asset_complex/
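Beyond stitching, asset_components can combine separate media files into one logical asset. The sketch below assembles an asset_complex with a video file and a discrete audio file as components of a single version; file names, UIDs, and the bucket path are hypothetical, the structure follows the examples above, and any track-to-channel mapping would require additional fields not shown here.

```python
import json

# Hypothetical file names and paths; one asset version whose components
# are a video file plus a separate discrete audio file.
source = {
    "uid": "my_complex_source",
    "kind": "source",
    "payload": {
        "kind": "asset_complex",
        "payload": {
            "location": {
                "storage_provider": "s3",
                "path": "s3://my_bucket/my_folder",
            },
            "asset_versions": [
                {
                    "version_uid": "v1",
                    "asset_components": [
                        {"kind": "name", "component_uid": "video", "name": "feature_video.mov"},
                        {"kind": "name", "component_uid": "audio", "name": "feature_audio.wav"},
                    ],
                }
            ],
        },
    },
}

print(json.dumps(source, indent=2))
```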
Transcode Task
Transcode
Example Transcode Object
{
"name": "Hybrik Transcode Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264",
"profile": "high",
"level": "4.0",
"frame_rate": 23.976
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 128
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
The Transcode Task allows you to specify the type(s) of transcode that you would like to perform in a job. A single Transcode Task can specify more than one output Target. For example, a single Transcode Task can create all 10 layers of an HLS package in a single task. All of the transcode targets will be processed at the same time, meaning there will be one decode pipeline feeding all of the outputs. Therefore, if you have 10 output targets, the fastest and the slowest will both complete at the same time, since they are being fed by the same decode pipeline. You can tell Hybrik to break up a Transcode Task into separate tasks to be run on multiple machines. Read our tutorial to learn more about the Transcode Task in Hybrik.
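Combining the pieces above, a sketch of a transcode Element whose targets may be distributed across machines via the split_task flag (documented in the Flags Object section). Paths, dimensions, and the UID are hypothetical illustrations.

```python
# Hypothetical transcode Element: two targets, with "smart" splitting
# allowing Hybrik to group them across machines for similar processing time.
transcode_element = {
    "uid": "transcode_task",
    "kind": "transcode",
    "task": {"flags": {"split_task": "smart"}},
    "payload": {
        "location": {"storage_provider": "s3", "path": "s3://my_bucket/my_output_folder"},
        "targets": [
            {"file_pattern": "{source_basename}_720.mp4",
             "container": {"kind": "mp4"},
             "video": {"codec": "h264", "height": 720}},
            {"file_pattern": "{source_basename}_1080.mp4",
             "container": {"kind": "mp4"},
             "video": {"codec": "h264", "height": 1080}},
        ],
    },
}
```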
transcode
Name | Type | Description |
---|---|---|
location | object | The base target location. Locations specified in deeper branches of this JSON will override this. |
options | object | Options for this transcode. Includes source delete option and source pre-fetch. |
source_pipeline | object | The source_pipeline sets filtering or segmenting prior to executing the transcode. |
watermarking | array | An array of objects defining the types of watermarks to be included in the output. |
support_files | array | Support files referenced inside of container/video/audio, for example in x265_options. |
temp_location | object | Specify a location for temporary files (typically multi-pass). |
temp_file_prefix | string | Specify a prefix for every temp file (for shared folder/location use across various tasks). |
targets | array | An array of target outputs. Each target specifies a location, container, video, and audio properties. |
Options
Example Options Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"options": {
"delete_sources": true
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
},
"video": {
},
"audio": [
{
}
]
}
]
}
}
transcode.options
Name | Type | Description |
---|---|---|
delete_sources | boolean | Set to delete the task's source files on successful execution. |
result | object | Options to modify how verbose MediaInfo results should be. |
Result
Example Result Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"options": {
"delete_sources": true,
"result": {
"mov_atom_descriptor_style": "full"
}
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
},
"video": {
},
"audio": [
{
}
]
}
]
}
}
transcode.options.result
Name | Type | Description |
---|---|---|
mov_atom_descriptor_style | enum none condensed by_track full | "none": do not list atoms. "condensed": pick the most important atoms and list them linearly with their tracks. "by_track": show the full hierarchy but list along with tracks. "full": show the full file hierarchy in the asset element. |
Source_pipeline
Example Source_pipeline Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"source_pipeline": {
"options": {
"force_ffr": true,
"max_decode_errors": 10
},
"segmented_rendering": {
"duration_sec": 180
}
},
"targets": [
{
"file_pattern": "{source_basename}_output.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"bitrate_kb": 128
}
]
}
]
}
}
The source_pipeline allows modification of the source prior to beginning the transcode. An example use case: you have 10 output targets and want each of them to have a logo imprinted in the same relative location. You could apply the imprint filter to the source prior to the creation of the outputs. Another example is splitting the processing of an output across many machines. By specifying segmented rendering in the source_pipeline, different segments of the source will be sent to different target machines.
transcode.source_pipeline
Name | Type | Description |
---|---|---|
trim | anyOf by_sec_in_out by_sec_in_dur by_timecode by_asset_timecode by_frame_nr by_section_nr by_media_track by_nothing | Object defining the type of trim operation to perform on an asset. |
ffmpeg_source_args | string | The FFmpeg source string to be applied to the source file. Use {source_url} within this string to insert the source file name(s). |
options | object | Options to be used during the decoding of the source. |
accelerated_prores | boolean | Use the accelerated Apple ProRes decoder. |
segmented_rendering | object | Segmented rendering parameters. |
manifest_decode_strategy | enum simple reject_complex reject_master_playlist | Defines the level of complexity allowed when using a manifest as a source. |
chroma_dither_algorithm | enum none bayer ed a_dither x_dither | The dithering algorithm to use for color conversions. |
scaler | object | The type of function to be used in scaling operations. |
frame_rate | number string | The framerate to use for this transcode. If not specified, the source framerate will be used. |
Options
Example Options Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"source_pipeline": {
"options": {
"force_ffr": true,
"max_decode_errors": 100,
"max_sequential_decode_errors": 10,
"resolve_manifest": true
},
"segmented_rendering": {
"duration_sec": 180
}
},
"targets": [
{
"file_pattern": "{source_basename}_output.mp4",
"container": {
},
"video": {
},
"audio": [
{
}
]
}
]
}
}
transcode.source_pipeline.options
Name | Type | Description |
---|---|---|
force_ffr | boolean | Force Fixed Frame Rate - even if the source file is detected as a variable frame rate source, treat it as a fixed framerate source. |
wait_for_source_timeout_sec | number | Set the maximum time for waiting to access the source data. This can be used to handle data that is in transit. |
max_decode_errors | integer | The maximum number of decode errors to allow. Normally, decode errors cause job failure, but there can be situations where a more flexible approach is desired. |
max_sequential_decode_errors | integer | The maximum number of sequential errors to allow during decode. This can be used in combination with max_decode_errors to set bounds on allowable errors in the source. |
no_rewind | boolean | Certain files may generate A/V sync issues when rewinding, for example after a pre-analysis. This will enforce a reset instead of rewinding. |
no_seek | boolean | Certain files should never be seeked because of potentially occurring precision issues. |
low_latency | boolean | Allows files to be loaded in low latency mode, meaning that there will be no analysis at startup. |
cache_ttl | integer | If a render node is allowed to cache this file, this will set the Time To Live (ttl). If not set (or set to 0) the file will not be cached but re-obtained whenever required. |
index_location | object | Specify a location for the media index file. |
auto_generate_silence_tracks | boolean | If this is set to true, and a video only file is passed in, the engine will handle the creation of auto-generated silent audio tracks as required. |
resolve_manifest | boolean | If this is set to true, the file is considered a manifest. The media files referred to in the manifest will be taken as the real source. |
master_manifest | object | Master file to be used for manifest resolution (for example IMF CPLs). |
master_manifests | array | An array of master files to be used for manifest resolution (for example IMF CPLs). |
ignore_errors | array | Attempt to ignore input errors of the specified types. Error type options include invalid_argument and non_monotonic_dts. |
auto_offset_sources | boolean | If this is set to true, the source is considered starting with PTS 0 regardless of the actual PTS. |
use_default_rgb2yuv_coefficients | boolean | If this is set to true, the source's color matrix is ignored. |
copy_global_metadata | boolean | This flag indicates whether global metadata should be copied from a source file to a target output. If the flag is set to true and there is more than one input file, global metadata is copied from the first one. |
demux_src_offset_otb | integer | Timebase offset to be used by the demuxer on proper asset component. |
copy_source_start_pts | boolean | Copy PTS offset from the source to the target on each asset component |
Segmented_rendering
Example Segmented_rendering Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"source_pipeline": {
"segmented_rendering": {
"duration_sec": 180
}
},
"targets": [
{
"file_pattern": "{source_basename}_output.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"bitrate_kb": 128
}
]
}
]
}
}
transcode.source_pipeline.segmented_rendering
Name | Type | Description |
---|---|---|
duration_sec | number | Duration (in seconds) of a segment in segment encode mode. minimum: 1 |
pts_zero_base | boolean | Setting this to true will reset PTS stamps in the stream to a zero-based start. |
scene_changes_search_duration_sec | number | Duration (in seconds) to look for a dominant previous or following scene change. Note that the segment duration can then be up to duration_sec + scene_changes_search_duration_sec long. |
generate_extended_report | boolean | Setting this to true will produce an extended report and commit it to the extended JSON. |
strict_cfr | boolean | The combiner will merge and re-stripe transport streams. |
mux_offset_otb | integer | Timebase offset to be used by the muxer. |
min_preroll_sec | number | Minimum preroll (in seconds) of a segment in segment encode mode. |
min_postroll_sec | number | Minimum postroll (in seconds) of a segment in segment encode mode. |
combiner_task_tags | array | A list of task tags for the combiner task in case of segmented rendering. Tags are custom strings that are used to match tasks to specific computing groups. |
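To make the segmented rendering parameters concrete, here is some back-of-the-envelope arithmetic using assumed values (a one-hour source; the duration_sec of 180 from the example above; a hypothetical scene_changes_search_duration_sec of 5).

```python
import math

# A 1-hour source split with duration_sec = 180 yields roughly 20 segments,
# each of which can be rendered on a different machine.
source_duration_sec = 3600          # hypothetical source length
duration_sec = 180                  # as in the example above
segments = math.ceil(source_duration_sec / duration_sec)
print(segments)  # 20

# Per the table above, a segment may be extended while searching for a
# dominant scene change, so its maximum length is the sum of the two values:
scene_changes_search_duration_sec = 5
max_segment_sec = duration_sec + scene_changes_search_duration_sec
print(max_segment_sec)  # 185
```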
Scaler
Example Scaler Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"source_pipeline": {
"scaler": {
"kind": "zscale",
"config_string": "dither=error_diffusion",
"apply_always": true
}
},
"targets": [
{
"file_pattern": "{source_basename}_output.mp4",
"container": {
},
"video": {
},
"audio": [
{
}
]
}
]
}
}
transcode.source_pipeline.scaler
Name | Type | Description |
---|---|---|
kind | enum default zscale | The type of scaling to be applied. default: default |
algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc | The algorithm to be used for scaling operations. This will apply to both up-scale and down-scale. These may be set separately using the upscale_algorithm and downscale_algorithm parameters. |
upscale_algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc | The algorithm to be used for up-scaling operations. |
downscale_algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc | The algorithm to be used for down-scaling operations. |
config_string | string | The configuration string to be used with the specified scaling function. |
apply_always | boolean | Always use the specified scaling function. |
Watermarking
Example Watermarking Object
{
"uid": "transcode_task_pass",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"watermarking": [
{
"kind": "nexguard_video",
"payload": {
"watermark_strength": "medium",
"license_manager": {
"ip": "192.168.21.100",
"port": 5093
},
"warnings_as_errors": {
"too_short": false
}
}
}
],
"targets": [
{
"file_pattern": "{source_basename}_watermarked.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"track_group_id": "V1",
"codec": "h264",
"width": 640,
"height": 360,
"frame_rate": 23.976,
"bitrate_kb": 600
},
"audio": [
{
"track_group_id": "A1",
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
}
]
}
}
Hybrik supports both audio and video watermarking through technology integrations with third-party companies like Nielsen and Nagra Kudelski. These watermarks are invisible (or inaudible) and can only be detected by a separate process. While Hybrik provides the integration with these third-party components, you must be a customer of these third-parties to actually use them. Please contact Dolby for more information regarding these technologies.
transcode.watermarking
Name | Type | Description |
---|---|---|
kind | enum nexguard_video | Defines the watermark type. default: nexguard_video |
payload | anyOf | The payload for the specific watermark. |
Targets
Example Targets Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "my_first_output_2tracks.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264",
"profile": "high",
"level": "4.0",
"frame_rate": "24000/1001"
},
"audio": [
{
"codec": "aac",
"channels": 1,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 96
},
{
"codec": "aac",
"channels": 1,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 96
}
]
},
{
"file_pattern": "my_second_output_6channels.mp4",
"existing_files": "replace",
"container": {
"kind": "mov"
},
"video": {
"width": 1920,
"height": 1080,
"codec": "h264",
"profile": "high",
"level": "4.0",
"frame_rate": "30000/1001"
},
"audio": [
{
"codec": "aac_lc",
"channels": 6,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 256
}
]
}
]
}
}
Each output target is defined by an element in the "targets" array. Each target will use the same source, but can have completely different output parameters. If a particular parameter (e.g. "frame_rate") is not specified for a target, then the source's value for that parameter will be used.
transcode.targets
Name | Type | Description |
---|---|---|
uid | string | A UID (arbitrary string) to allow referencing this target. This UID may be used, for example, to specify a target for preview generation. User supplied, must be unique within an array of targets. |
manifest_uids | array | An array of UIDs defining the manifests that this target belongs to. A target may belong to one or more manifests. |
processing_group_ids | array | Allows target selection for subsequent tasks. |
location | object | A location that overrides any location defined within the parents of this encode target. |
file_pattern | string | This describes the target file name. Placeholders such as {source_basename} for the source file name are supported. default: {source_basename} |
trim | anyOf by_sec_in_out by_sec_in_dur by_timecode by_asset_timecode by_frame_nr by_section_nr by_media_track by_nothing | Object defining the type of trim operation to perform on an asset. |
existing_files | enum delete_and_replace replace replace_late rename_new rename_org fail | The desired behavior when a target file already exists. "replace": will delete the original file and write the new one. "rename_new": gives the new file a different auto-generated name. "rename_org": renames the original file. Note that renaming the original file may not be possible depending on the target location. "delete_and_replace": attempts to immediately delete the original file. This will allow for fast failure in the case of inadequate permissions. "replace_late": does not attempt to delete the original -- simply executes a write. default: fail |
parameter_compliance | enum strict relaxed | Conflicts between parameters (for example, if both CBR and max_bitrate are specified) will generate an error if "strict" is specified; otherwise, Hybrik will execute the "intent" of the parameters. |
force_local_target_file | boolean | This will enforce the creation of a local file and bypass in-stream processing. Only used in scenarios where in-stream processing is impossible due to format issues. |
encryption_id | string | Encryption id, used for referencing encryptions. |
include_if_source_has | array | This array allows for conditionally outputting tracks based on whether or not a specific input track exists. The tracks in the source are referred to by number reference: audio[0] refers to the first audio track. |
include_conditions | array | Specifies conditions under which this output will be created. Can use JavaScript math.js nomenclature. |
size_estimate | number string | Setting a size_estimate can help in allocating the right amount of temporary local storage. Omit this value if it cannot be guessed with +/- 5% certainty. |
ffmpeg_args | string | The FFmpeg command line to be used. Any properties defined in the target section of this JSON will override FFmpeg arguments defined here. |
ffmpeg_args_compliance | enum strict relaxed minimal late late_relaxed late_minimal | Hybrik will interpret an FFmpeg command line. "relaxed" will also allow unknown or conflicting FFmpeg options to pass. The late_* options will resolve this at render time and preserve the original ffmpeg_args in the JSON. |
hybrik_encoder_args | string | The Hybrik encoder arguments line to be used. May overwrite default arguments. |
nr_of_passes | integer string | This specifies how many passes the encode will use. minimum: 1 maximum: 10 default: 1 |
slow_first_pass | boolean | h264/h265: enables a slow (more precise) first pass. |
compliance | enum xdcam_imx xdcam_hd xdcam_hd_422 xdcam_proxy xdcam_dvcam25 avcintra dvb atsc xavc hmmp senvu_2012 | This setting can be used to force output compliance to a particular standard, for example XDCAM-HD. |
compliance_enforcement | enum strict relaxed | Defines whether the output compliance will be strict or relaxed. Relaxed settings allow parameters to be overridden in the JSON. |
temp_location | object | Specify a location for temporary files (typically multi-pass). |
temp_file_prefix | string | Specify a prefix for every temp file (for shared folder/location use across various targets and tasks). |
container | object | The transcoding container parameters. |
video | object | The video parameters for the target output. |
audio | array | Array defining the audio tracks in the output. Each element of the array is an object defining the track parameters, including codec, number of channels, bitrate, etc. |
timecode | array | An array defining the timecode tracks in this output. |
metadata | array | Array defining the metadata tracks in the output. |
subtitle | array | Array defining the subtitle tracks in the output. |
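The {source_basename} placeholder in file_pattern expands to the source file's base name. The helper below only mimics that substitution for illustration; it is not Hybrik's actual implementation, and other placeholders such as {default_extension} are omitted.

```python
# Illustrative stand-in for Hybrik's file_pattern expansion (assumption:
# only the {source_basename} placeholder is handled here).
def expand_file_pattern(pattern: str, source_basename: str) -> str:
    return pattern.replace("{source_basename}", source_basename)

print(expand_file_pattern("{source_basename}_converted.mp4", "sample1"))
# sample1_converted.mp4
```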
Container
Example Container Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_output{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mpeg2ts"
},
"video": {
"codec": "h264",
"bitrate_mode": "cbr",
"bitrate_kb": 1000,
"profile": "main",
"level": "4.0",
"height": 720
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 128
}
]
}
]
}
}
transcode.targets.container
Name | Type | Description |
---|---|---|
verify | boolean | Enable or disable post transcode verification for this track. default: true |
kind | enum copy avi hls hls_subtitle dash-mp4 dash-vod dash-live dash-segment mp4 fmp4 segmented_mp4 mpegts segmented_ts mpeg2ts mov mxf mxf_d10 webm mkv nut ismv 3gp mpeg2video mp1_system mp2_program mp1_elementary mp2_elementary vob dvd aac mp3 wav aiff aa flac alaw mulaw ogg jpg j2k ass srt stl ttml imsc1 webvtt elementary dvbsub scc mcc | The container (i.e. multiplexing) format. |
vendor | enum ap10 | Insert this string as a "Vendor String" for those containers/packages that support it (such as MOV and MP4). |
movflags | string | The FFmpeg movflags. See https://www.ffmpeg.org/ffmpeg-formats.html for more information. |
muxrate_kb | integer | The multiplexer rate - only valid for MPEG transport streams. Omit to keep M2TS padding at a minimum. |
mux_warnings | enum as_errors ignore | Allows jobs to complete that would otherwise fail based on multiplexer warnings. |
faststart | boolean | Enable progressive download for .mov and .mp4 files. |
transport_id | integer | Set the TS Transport ID - only used for MPEG transport streams. maximum: 8190 |
use_sdt | boolean | Whether or not to include a Service Description Table in the target Transport Stream. MediaInfo will display this as "Menu." |
pcr_pid | integer | Set the PCR PID - only used for MPEG transport streams. maximum: 8190 |
pmt_pid | integer | Set the PMT PID - only used for MPEG transport streams. maximum: 8190 |
pcr_interval_ms | integer | Set the PCR interval - only used for MPEG transport streams. minimum: 20 maximum: 1000 |
pmt_interval_ms | integer | Set the PMT interval - only used for MPEG transport streams. minimum: 20 maximum: 1000 |
pat_interval_ms | integer | Set the PAT interval - only used for MPEG transport streams. minimum: 20 maximum: 1000 |
ts_offset_ms | number | Set the ts offset in the output file. |
segment_duration_sec | number | The segment duration in seconds for segmented or fragmented streams such as HLS or mp4/MPEG-DASH. Decimal notation (e.g. 5.5) is supported. |
vframe_align_segment_duration | boolean | Ensure segment duration is an integer multiple of the frame duration. default: true |
auto_speed_change_delta_percent | number | If the frame rate delta is larger than this value, do not attempt to speed-change. Default: just allows 29.97->30 and 23.97<->24 speed changes. |
align_to_av_media | boolean | For subtitles only, align duration and segmenting to A/V media time. |
references_location | object | The location of payload files for containers having external references. |
title | string | An optional title. Note that not all multiplexers support adding a title. |
author | string | An optional author. Note that not all multiplexers support adding an author. |
copyright | string | An optional copyright string. Note that not all multiplexers support adding a copyright string. |
info_url | string | An optional info URL string. Note that not all multiplexers support adding a URL. |
filters | array | An array defining the filters to be applied at the container level. |
attributes | array | Container attributes. The specific meaning depends on container format. For dash-mp4 for example, these can be mpd xpath replacements. |
enable_data_tracks | boolean | By default, data tracks, such as time code, in mov are disabled/unchecked. This will enable all such tracks. |
mov_atoms | object | Override container or all-track MOV atoms. |
dvd_compatible | boolean | Enables constraints that enhance DVD compatibility. Applies to kind='mp2_program/dvd/vob' only. |
brand | string | Setting the ftyp of a mp4/mov/3g file. Example: '3gp5'. |
compatible_brands | array | Appending to compatible ftyp(s) of a mp4/mov/3g file. Example: '["3gp5"]'. |
forced_compatible_brands | array | Replacing the compatible ftyp(s) of a mp4/mov/3g file. Example: '["3gp5"]'. |
mainconcept_mux_profile | enum VCD SVCD DVD DVD_MPEG1 DVD_DVR DVD_DVR_MPEG1 DVD_PVR DVD_PVR_MPEG1 DTV DVB MMV DVHS ATSC ATSCHI CABLELABS ATSC_C HDV_HD1 HDV_HD2 D10 D10_25 D10_30 D10_40 D10_50 HD_DVD | MainConcept multiplexer profile. See the MainConcept documentation for details. |
mainconcept_mux_options | string | Provide direct instructions to the MainConcept multiplexer. Values are constructed as "prop=val,prop=val". See the MainConcept documentation for valid values. |
scte35 | oneOf scte35_in_source scte35_in_sidecar scte35_in_json | Settings to control the insertion of SCTE35 markers into the output. |
Container Filters
Example Container Filters Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"container": {
"kind": "mp4",
"filters": [
{
"speed_change": {
"factor": 1.1,
"pitch_correction": true
}
}
]
},
"video": {
"codec": "h264",
"bitrate_mode": "vbr",
"bitrate_kb": 1000,
"max_bitrate_kb": 1200,
"frame_rate": 25,
"profile": "main",
"level": "4.0",
"height": 720
},
"audio": [
]
}
]
}
}
Container filters affect both the audio and video components of the output.
transcode.targets.container.filters
Name | Type | Description |
---|---|---|
kind | enum speed_change | Specifies the type of container filter to be applied. default: speed_change |
include_conditions | array | Specifies conditions under which this filter will be applied. Can use JavaScript math.js nomenclature. |
payload | anyOf speed_change | The payload for the container filter. |
Speed_change
transcode.targets.container.filters.payload.speed_change
Example Speed_change Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"container": {
"kind": "mp4",
"filters": [
{
"kind": "speed_change",
"payload": {
"factor": "25/(24000/1001)",
"pitch_correction": true
}
}
]
},
"video": {
"codec": "h264",
"bitrate_mode": "vbr",
"bitrate_kb": 1000,
"max_bitrate_kb": 1200,
"frame_rate": 25,
"profile": "main",
"level": "4.0",
"height": 720
},
"audio": [
]
}
]
}
}
Name | Type | Description |
---|---|---|
factor | number string | The speed change to be applied. The default is 1.0. Can use expressions such as (24000/1001)/25. default: 1 |
pitch_correction | boolean | Correct the audio pitch of the speed-changed streams. default: true |
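As a worked example of the factor expression in the JSON above: converting 23.976 fps (24000/1001) material to 25 fps means playing it faster by 25 / (24000/1001), roughly a 4.27% speed-up, which is why pitch_correction matters for the audio.

```python
from fractions import Fraction

# Exact rational arithmetic for the speed_change factor from the example:
# 25 / (24000/1001) = 25025/24000, i.e. about 4.27% faster.
factor = Fraction(25, 1) / Fraction(24000, 1001)
print(float(factor))  # ~1.042708
```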
Attributes
transcode.targets.container.attributes
Example Attributes Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"container": {
"kind": "mp4",
"attributes": [
{
"name": "custom_tag1",
"value": "custom_value1"
},
{
"name": "custom_tag2",
"value": "custom_value2"
}
]
},
"video": {
},
"audio": [
]
}
]
}
}
Name | Type | Description |
---|---|---|
name | string | The name component of a name/value pair. |
value | string | The value component of a name/value pair. |
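Because attributes is just an array of name/value pairs, it can be generated from a plain dictionary. A small illustrative helper (not part of the Hybrik API):

```python
def container_attributes(tags):
    """Expand a dict of custom tags into the name/value array
    expected by transcode.targets.container.attributes."""
    return [{"name": name, "value": str(value)} for name, value in tags.items()]

attrs = container_attributes({"custom_tag1": "custom_value1",
                              "custom_tag2": "custom_value2"})
```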
Mov_atoms
transcode.targets.container.mov_atoms
Example Mov_atoms Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"container": {
"kind": "mov",
"mov_atoms": {
"no_empty_elst": true,
"no_negative_cts": true
}
},
"video": {
},
"audio": [
]
}
]
}
}
Name | Type | Description |
---|---|---|
no_empty_elst | boolean | Avoid writing an initial elst entry in edts. |
no_negative_cts | boolean | Avoid negative cts (in mov and mp4) to support QuickTime 7.x and lower. |
Scte35_in_source
transcode.targets.container.scte35_in_source
Example Scte35_in_source Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_scte35.ts",
"existing_files": "replace",
"container": {
"kind": "mpegts",
"scte35": {
"write_scte35_packets": true,
"use_source_media": "if_exists"
},
"pmt_pid": 480,
"transport_id": 1,
"muxrate_kb": 2860
},
"video": {
"codec": "mpeg2",
"pid": 481,
"width": 720,
"height": 480,
"max_bframes": 2,
"idr_interval": {
"frames": 90
}
},
"audio": [
{
"pid": 482,
"codec": "ac3",
"channels": 2,
"sample_rate": 48000,
"bitrate_kb": 224,
"language": "eng"
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
use_source_media | enum required if_exists |
Use the SCTE35 data in the source. |
write_scte35_packets | boolean | This setting may be set to false in order to insert i-frames at the locations defined by the SCTE35 metadata but not actually insert the SCTE35 packets. default: true |
Scte35_in_sidecar
transcode.targets.container.scte35_in_sidecar
Example Scte35_in_sidecar Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_scte35.ts",
"existing_files": "replace",
"container": {
"kind": "mpegts",
"scte35": {
"write_scte35_packets": true,
"splicepoint_file": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder/my_splicepointfile.json"
}
},
"pmt_pid": 480,
"transport_id": 1,
"muxrate_kb": 2860
},
"video": {
"codec": "mpeg2",
"pid": 481,
"width": 720,
"height": 480,
"max_bframes": 2,
"idr_interval": {
"frames": 90
}
},
"audio": [
{
"pid": 482,
"codec": "ac3",
"channels": 2,
"sample_rate": 48000,
"bitrate_kb": 224,
"language": "eng"
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
splicepoint_file | object | The location of the file with the insertion points for the SCTE35 markers. |
write_scte35_packets | boolean | This setting may be set to false in order to insert i-frames at the locations defined by the SCTE35 metadata but not actually insert the SCTE35 packets. default: true |
Scte35_in_json
transcode.targets.container.scte35_in_json
Example Scte35_in_json Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_scte35.ts",
"existing_files": "replace",
"container": {
"kind": "mpegts",
"scte35": {
"write_scte35_packets": true,
"scte35_sections": [
{
"insertTime": 180000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 8,
"program": {
"spliceTime": {
"ptsTime": 360000
}
}
}
}
},
{
"insertTime": 450000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 9,
"program": {
"spliceTime": {
"ptsTime": 630000
}
}
}
}
},
{
"insertTime": 810000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 10,
"program": {
"spliceTime": {
"ptsTime": 900000
}
}
}
}
}
]
},
"pmt_pid": 480,
"transport_id": 1,
"muxrate_kb": 2860
},
"video": {
"codec": "mpeg2",
"pid": 481,
"width": 720,
"height": 480,
"max_bframes": 2,
"idr_interval": {
"frames": 90
}
},
"audio": [
{
"pid": 482,
"codec": "ac3",
"channels": 2,
"sample_rate": 48000,
"bitrate_kb": 224,
"language": "eng"
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
scte35_sections | array | An array describing the insertion points for the SCTE35 markers. Each element in the array describes the parameters of the SCTE35 marker. |
write_scte35_packets | boolean | This setting may be set to false in order to insert i-frames at the locations defined by the SCTE35 metadata but not actually insert the SCTE35 packets. default: true |
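The insertTime and ptsTime values in the example above are consistent with the 90 kHz MPEG-TS clock (180000 ticks = 2 seconds). Assuming that clock, a cue list in seconds can be converted into scte35_sections entries. An illustrative sketch (splice_section is a hypothetical helper, not part of the Hybrik API):

```python
MPEG_TS_CLOCK = 90_000  # ticks per second; assumption based on the example values

def splice_section(event_id, insert_sec, splice_sec):
    """Build one scte35_sections entry: insert the marker at insert_sec,
    signalling a splice at splice_sec (both in seconds)."""
    return {
        "insertTime": int(insert_sec * MPEG_TS_CLOCK),
        "spliceInfoSection": {
            "spliceInsert": {
                "spliceEventId": event_id,
                "program": {
                    "spliceTime": {"ptsTime": int(splice_sec * MPEG_TS_CLOCK)}
                },
            }
        },
    }

# Marker inserted at 2s, splicing at 4s - matches the first example entry
sections = [splice_section(8, 2, 4), splice_section(9, 5, 7)]
```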
Scte35_sections
transcode.targets.container.scte35_in_json.scte35_sections
Example Scte35_sections Object
{
"file_pattern": "{source_basename}_scte35.ts",
"existing_files": "replace",
"container": {
"kind": "mpegts",
"scte35": {
"write_scte35_packets": true,
"scte35_sections": [
{
"insertTime": 180000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 8,
"program": {
"spliceTime": {
"ptsTime": 360000
}
}
}
}
},
{
"insertTime": 450000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 8,
"program": {
"spliceTime": {
"ptsTime": 630000
}
}
}
}
}
]
},
"pmt_pid": 480,
"transport_id": 1,
"muxrate_kb": 2860
}
}
Name | Type | Description |
---|---|---|
insertTime | integer | The insertion time for the SCTE35 marker. |
spliceInfoSection | object | The splice information for a specific ad. |
SpliceInfoSection
transcode.targets.container.scte35_in_json.scte35_sections.spliceInfoSection
Example SpliceInfoSection Object
{
"file_pattern": "{source_basename}_scte35.ts",
"existing_files": "replace",
"container": {
"kind": "mpegts",
"scte35": {
"write_scte35_packets": true,
"scte35_sections": [
{
"insertTime": 180000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 8,
"program": {
"spliceTime": {
"ptsTime": 360000
}
}
}
}
},
{
"insertTime": 450000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 8,
"program": {
"spliceTime": {
"ptsTime": 630000
}
}
}
}
}
]
},
"pmt_pid": 480,
"transport_id": 1,
"muxrate_kb": 2860
}
}
Name | Type | Description |
---|---|---|
spliceInsert | object | The splice information for a specific insertion point. |
SpliceInsert
transcode.targets.container.scte35_in_json.scte35_sections.spliceInfoSection.spliceInsert
Example SpliceInsert Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_scte35.ts",
"existing_files": "replace",
"container": {
"kind": "mpegts",
"scte35": {
"write_scte35_packets": true,
"scte35_sections": [
{
"insertTime": 180000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 8,
"program": {
"spliceTime": {
"ptsTime": 360000
}
}
}
}
}
]
}
},
"video": {
"codec": "mpeg2",
"width": 720,
"height": 480
},
"audio": [
{
"codec": "ac3",
"channels": 2,
"sample_rate": 48000,
"bitrate_kb": 224
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
spliceEventId | integer | The ID for the splice event. |
spliceEventCancelIndicator | boolean | In a broadcast, indicates whether a specific insertion has been cancelled. |
outOfNetworkIndicator | boolean | True indicates cue-out from the network (the start of an ad). False indicates cue-in from the ad to the network. |
programSpliceFlag | boolean | Setting this flag to true indicates Program Splice Mode, while setting it to false indicates Component Splice Mode. |
uniqueProgramId | integer | A unique identifier for the viewing event. |
availNum | integer | An identification for a specific avail within one Unique Program ID. |
availsExpected | integer | The expected number of individual avails within the current viewing event. If this field is set to zero, then the availNum field is ignored. |
program | object | Object to specify the spliceTime of the Program. |
Program
transcode.targets.container.scte35_in_json.scte35_sections.spliceInfoSection.spliceInsert.program
Example Program Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_scte35.ts",
"existing_files": "replace",
"container": {
"kind": "mpegts",
"scte35": {
"write_scte35_packets": true,
"scte35_sections": [
{
"insertTime": 180000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 8,
"program": {
"spliceTime": {
"ptsTime": 360000
}
}
}
}
}
]
}
},
"video": {
"codec": "mpeg2",
"width": 720,
"height": 480
},
"audio": [
{
"codec": "ac3",
"channels": 2,
"sample_rate": 48000,
"bitrate_kb": 224
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
spliceTime | object | The Program spliceTime. |
SpliceTime
transcode.targets.container.scte35_in_json.scte35_sections.spliceInfoSection.spliceInsert.program.spliceTime
Example SpliceTime Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_scte35.ts",
"existing_files": "replace",
"container": {
"kind": "mpegts",
"scte35": {
"write_scte35_packets": true,
"scte35_sections": [
{
"insertTime": 180000,
"spliceInfoSection": {
"spliceInsert": {
"spliceEventId": 8,
"program": {
"spliceTime": {
"ptsTime": 360000
}
}
}
}
}
]
}
},
"video": {
"codec": "mpeg2",
"width": 720,
"height": 480
},
"audio": [
{
"codec": "ac3",
"channels": 2,
"sample_rate": 48000,
"bitrate_kb": 224
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
ptsTime | integer | The spliceTime in PTS. |
Video
Example Video Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr"
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 48000
}
]
}
]
}
}
transcode.targets.video
Name | Type | Description |
---|---|---|
include_if_source_has | array | This array allows for conditionally outputting tracks based on whether or not a specific input track exists. The tracks in the source are referred to by number reference: audio[0] refers to the first audio track. |
include_conditions | array | Include this target output if these conditions are met. |
enabled | boolean | Enable or disable video in output. default: true |
verify | boolean | Enable or disable post transcode verification for this track. default: true |
codec | enum copy h264 h265 prores mpeg1 mpeg2 mpeg4 vp8 vp9 dv25 dv50 dnxhd mjpeg jpeg2000 raw png jpeg |
The desired output video codec. |
codec_provider | enum ffmpeg dolby_impact mainconcept_v10 mainconcept_v11 mainconcept_v13 |
The desired provider/brand of the video codec. For more details on Dolby Impact, please read our tutorial - https://tutorials.hybrik.com/dolby_impact/ |
pid | integer |
The video program ID - only used for MPEG transport streams. maximum: 8190 |
track_group_id | string | This indicates which Group this track belongs to. Multiple tracks with the same content but different bitrates would have the same track_group_id. |
layer_id | string | This indicates which Layer this tracks belongs to. For example, this allows bundling one video layer and multiple audio layers with same bitrates but different languages. |
layer_affinities | array | This indicates which other layers this layer can be combined with. For example, to combine audio and video layers. |
width | number |
Width of the output video. minimum: 8 maximum: 8192 |
height | number |
Height of the output video. minimum: 8 maximum: 8192 |
width_modulus | integer |
If width is calculated automatically (from aspect ratio and height for example), then only allow truncated integer multiples of this value. minimum: 1 maximum: 64 default: 2 |
height_modulus | integer |
If height is calculated automatically (from aspect ratio and width for example), then only allow truncated integer multiples of this value. minimum: 1 maximum: 64 default: 2 |
frame_rate | number string |
The video frame rate - can be expressed in decimal or fraction notation, examples: 29.97, 30000/1001 |
par | number string |
The pixel aspect ratio. Optional. May be expressed in decimal or fraction notation, examples: 0.9, 8/9 |
dar | number string |
The display aspect ratio. Optional. May be expressed in decimal or fraction notation, examples: 1.33, 4/3 |
ar_max_distortion | number |
A small amount of distortion can be allowed in order to minimize letter- or pillar-boxing. The default is 5%. maximum: 1 default: 0.05 |
ar_auto_crop | enum none distorted preserving |
If an aspect ratio adjustment needs to occur, setting 'none' here will add full letter/pillar boxes. Selecting 'distorted' will reduce the required padding and cropping, as determined by ar_pad_crop_ratio, by distorting the image with ar_max_distortion. Selecting 'preserving' will do a full aspect ratio correct operation, using the cropping vs. padding ratio from ar_pad_crop_ratio. default: none |
ar_pad_crop_ratio | number |
To reduce letter and pillar boxing, the video can instead be slightly or fully cropped. A value of 1 will never crop and will add full letter/pillar boxes; a value of 0 will fully crop with no letter/pillar boxing. If ar_auto_crop is set to 'distorted', ar_max_distortion is applied before the remaining cropping is calculated from ar_pad_crop_ratio. Currently, only 0.0 and 1.0 are supported; the full range will be supported in a future release. maximum: 1 default: 1.0 |
video_format | enum component pal ntsc secam mac unspecified |
Video format flag (metadata only). |
add_vbi_if_needed | boolean |
If the video height is 480 or 576, pad the video with 32 top lines without distorting the aspect ratio. |
interlace_mode | enum progressive tff bff |
The interlacing mode: progressive, top field first, or bottom field first. |
smart_temporal_conversions | boolean | If source/dest interlacing properties or frame rates differ, automatically apply the best possible conversion. |
smart_chroma_conversions | boolean | If source/dest hd/sd properties or color space/matrix/primary differ, automatically apply the best possible conversion. |
chroma_format | enum yuv411p yuv420p yuv422p yuv420p10le yuv422p10le yuv444p10le yuva444p10le yuv420p12le yuv422p12le yuv444p12le yuva444p12le yuv420p16le yuv422p16le yuv444p16le yuva444p16le yuvj420p yuvj422p rgb24 rgb48be rgba64be rgb48le rgba64le gbrp10le gbrp10be gbrp12le gbrp12be |
The pixel format. Note that not all codecs will support all formats. |
ire_range_mode | enum auto full limited |
The IRE signal range: auto, full, or limited. The default is determined by video size. |
color_primaries | enum bt601 bt709 bt470m bt470bg smpte170m smpte240m smpte431 smpte432 bt2020 film |
Chroma coordinate reference of the source primaries. The default is determined by video size. |
color_trc | enum bt601 bt709 st2084 bt470m gamma22 bt470bg gamma28 smpte170m smpte240m smpte428 linear log log sqrt bt1361 ecg iec61966 2.1 iec61966 2.4 bt2020_10bit bt2020_12bit hlg arib_stdb67 |
Color transfer characteristics. The default is determined by video size. |
color_matrix | enum rgb bt470bg bt601 bt709 smpte170m smpte240m bt2020c bt2020nc smpte2085 |
YUV/YCbCr colorspace type. |
use_broadcast_safe | boolean | This will limit signal values to permitted IRE levels (using 7.5..100 IRE). default: false |
force_source_par | number string |
This will override the automatically detected source pixel aspect ratio. Can be omitted, or set using a numeric or fractional designation such as 0.9, 8/9, etc. |
force_source_dar | number string |
This will override the automatically detected source display aspect ratio. Can be omitted or set using one of the following formats: 1.33, 4/3, ... |
profile | enum baseline simple main main10 main-intra mainstillpicture main444-8 main444-intra main444-stillpicture main10-intra main422-10 main422-10-intra main444-10 main444-10-intra high high10 high422 high444 MP HP SP 422P apco apcs apcn apch ap4h ap4x dnxhr_lb dnxhr_sq dnxhr_hq dnxhr_hqx dnxhr_444 jpeg2000 p0 p1 cinema2k cinema4k cinema2k_scalable cinema4k_scalable cinema_lts bc_single bc_multi bc_multi_r imf_2k imf_4k imf_8k imf_2k_r imf_4k_r imf_8k_r |
The profile for your codec. Not all profiles are valid for all codecs. |
level | enum 1.0 1.1 1.2 1.3 2.0 2.1 2.2 3.0 3.1 3.2 4.0 4.1 4.2 5.0 5.1 5.2 6.0 LL ML HL H14 |
The codec-dependent level - please reference ISO/IEC 14496-10, ISO/IEC 13818-2 etc. |
mainlevel | integer | The codec-dependent main level - please reference ISO/IEC 15444-1 (J2K) etc. |
sublevel | integer | The codec-dependent sub level - please reference ISO/IEC 15444-1 (J2K) etc. |
preset | enum ultrafast superfast veryfast faster fast medium slow slower veryslow placebo |
Codec-dependent preset, applies to h264 and h265 only. |
tune | enum psnr ssim fastdecode zerolatency grain film animation stillimage touhou |
Codec-dependent tune option, applies to vp9, h264 and h265 only. Allowed values depend on codec. |
use_cabac | boolean | This will enable context-adaptive binary arithmetic coding for h.264. If not set, the profile/level combination will determine if CABAC is used. |
refs | integer |
The number of h.264 reference frames to be used for future frames. If not set, the profile/level combination will determine the proper number of reference frames. maximum: 16 |
slices | integer |
The number of h.264 frame slices. |
use_loop_filter | boolean | Enable h264/h265 loop filters. |
x264_options | string |
x.264 specific codec options - please reference https://sites.google.com/site/linuxencoding/x264-ffmpeg-mapping for an excellent explanation. |
x265_options | string |
x.265 specific codec options - please reference https://x265.readthedocs.io/en/default/cli.html |
dolby_impact_options | string |
Dolby Impact HEVC specific codec options. |
mainconcept_video_options | string |
MainConcept specific codec options - please reference the mainconcept codec documentation. |
mainconcept_video_profile | enum VCD SVCD DVD DVD_MPEG1 DVD_DVR DVD_DVR_MPEG1 DVD_PVR DVD_PVR_MPEG1 DTV DVB MMV DVHS ATSC ATSCHI CABLELABS ATSC_C HDV_HD1 HDV_HD2 D10 D10_25 D10_30 D10_40 D10_50 HD_DVD |
One of the preset values for profile (e.g. CABLELABS). |
mainconcept_stream_mux_options | string |
Provide direct stream instruction to the MainConcept multiplexer. Values are constructed as "prop=val,prop=val". See MainConcept documentation for valid values. |
ffmpeg_args | string |
The FFmpeg (target) command line arguments to be used. Note that these will override competing settings in the JSON. |
encoder_info | boolean | Override encoder string inserted by x264 or x265 encoders. |
bitrate_mode | enum cbr cbr_unconstrained crf vbr cq |
The bitrate mode for the codec. The default value depends on the codec being used. "crf" bitrate mode (Constant Rate Factor) is only valid for x264 and x265. |
bitrate_kb | number |
The video bitrate in kilobits per second. For vbr, this is the average bitrate. minimum: 1 maximum: 10000001 |
min_bitrate_kb | number |
The minimum video bitrate in kilobits per second. Only valid for crf and vbr. minimum: 1 maximum: 10000001 |
max_bitrate_kb | number |
The maximum video bitrate in kilobits per second. Only valid for crf and vbr. minimum: 1 maximum: 10000001 |
vbv_buffer_size_kb | number |
The vbv buffer size in kilobits. maximum: 1000000 |
vbv_init_occupancy_kb | number |
The vbv init occupancy in kilobits. Important for chunked encoding like HLS. maximum: 1000000 |
max_available_vbv | number |
The maximum vbv fullness (0 = 0%, 1 = 100%). maximum: 1 |
min_available_vbv | number |
The minimum vbv fullness (0 = 0%, 1 = 100%). maximum: 1 |
vbv_constraints_failure | enum during_pass_1 before_pass_2 after_pass_2 |
Specify when during the transcode to fail if VBV constraints can not be met. |
hrd_signaling | enum encoder multiplexer |
Adds HRD parameters to h265 encoding when set to encoder; no codec-specific parameters need to be configured. |
use_closed_gop | boolean | Use closed GOPs - not valid for all codecs. |
first_gop_closed | boolean | First GOP only shall be closed - only valid for Mainconcept MPEG2. |
max_bframes | integer |
The maximum number of B frames between I and P frames. maximum: 100 |
idr_interval | anyOf default |
Describes a frame interval, as count or seconds. |
iframe_interval | anyOf default |
Describes a frame interval, as count or seconds. |
forced_keyframes | object | Allows forcing keyframe insertion at specific frames or times. |
crf | number |
The Constant Rate Factor setting for h.264 and h.265. A setting of 18 is considered excellent. A change of plus/minus 6 should roughly halve/double the resulting file size. See https://trac.ffmpeg.org/wiki/Encode/H.264 maximum: 63 |
q | number |
A setting to determine quality; the effect depends on the codec used. |
qscale | number |
A setting to determine QScale difference between I and P frames. Codec dependent, see https://www.ffmpeg.org/ffmpeg-codecs.html |
qmin | number |
The video minimum quantizer scale. minimum: -1 maximum: 69 |
qmax | number |
The video maximum quantizer scale. minimum: -1 maximum: 1024 |
dc_precision | integer |
The number of bits to use in calculating the DC component of intra-coded blocks. minimum: 8 maximum: 10 |
use_sequence_display_extension | boolean | This will write the sequence display extension (MPEG2 only). |
use_sequence_header_per_gop | boolean | This will write a sequence header for each gop. |
use_intra | boolean | Set to use only I-frames. Default depends on the codec. |
use_intra_vlc | boolean | Set to use intra-vlc tables only (MPEG2 only). |
use_non_linear_quant | boolean | Set to use non-linear quantizer (MPEG2 only). |
use_interlace_encode_mode | boolean | This determines if the codec shall be forced to perform interlaced (field-separated) encodes. Default: "auto". Not to be confused with interlace_mode. |
use_scene_detection | boolean | Enable or disable scene change detection. Disabling will come with a steep penalty on video quality. default: true |
use_low_delay | boolean | Instruct the encoder to use low delay encoding modes, exact meaning varies by codec. |
rtp_payload_size | integer | RTP payload size in bytes. |
afd | number |
Set AFD (Active Format Description) value. This is only supported in MXF outputs. |
vtag | string |
Allows overriding the default hev1 or hvc1 tags applied to HEVC content. |
track_name | string |
The name of this video track - will be used for mov files and MPEG-DASH (representation::id) for example. May be ignored, depending on your container format. |
closed_captions | object | Object describing the CC parameters for the targeted output. |
mpeg2 | object | A set of MPEG2-specific options for closed captions and telecine. |
mov_atoms | object | Override video track MOV atoms. |
hdr10 | object | Object describing the HDR10 metadata source location and mastering display characteristics. |
dolby_vision | object | Object describing Dolby Vision encoding options. |
image_sequence | object | Object defining the settings to be used when outputting a sequence of images. |
rotation | number string |
The video rotation. Optional. May be expressed in decimal, examples: 90.0 |
filters | array | An array of video filters to be applied to the output targets. |
scaler | object | The type of function to be used in scaling operations. |
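Two of the numeric controls above lend themselves to quick arithmetic sketches. The following Python (fit_dimensions and crf_size_ratio are illustrative names, not part of the Hybrik API) shows the pad-versus-crop extremes currently selectable with ar_pad_crop_ratio, and the CRF rule of thumb that a change of 6 roughly halves or doubles file size:

```python
def fit_dimensions(src_w, src_h, dst_w, dst_h, pad_crop_ratio=1.0):
    """Sketch of the pad-vs-crop extremes selected by ar_pad_crop_ratio
    (only 0.0 and 1.0 are currently supported). Returns the scaled source
    size relative to the destination frame: 1.0 fits entirely (adds
    letter/pillar boxes), 0.0 fills the frame and crops the overflow."""
    scale_fit = min(dst_w / src_w, dst_h / src_h)   # pad: nothing cropped
    scale_fill = max(dst_w / src_w, dst_h / src_h)  # crop: no boxes
    scale = scale_fit if pad_crop_ratio >= 0.5 else scale_fill
    return round(src_w * scale), round(src_h * scale)

def crf_size_ratio(crf_from, crf_to):
    """Rule of thumb from the crf row above: every +6 CRF roughly
    halves the output size, every -6 roughly doubles it."""
    return 2 ** ((crf_from - crf_to) / 6)

# 4:3 SD (720x540) into a 1280x720 frame: pillarboxed vs. cropped
print(fit_dimensions(720, 540, 1280, 720, 1.0))  # (960, 720)
print(fit_dimensions(720, 540, 1280, 720, 0.0))  # (1280, 960)
```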
Default
transcode.targets.video.idr_interval.default
Example Default Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"idr_interval": {
"default": {
"min_frames": 30,
"max_frames": 60
}
}
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
min_frames | integer | Minimum number of frames for the frame interval. |
max_frames | integer | Maximum number of frames for the frame interval. |
Default
transcode.targets.video.iframe_interval.default
Example Default Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"iframe_interval": {
"default": {
"min_frames": 30,
"max_frames": 60
}
}
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
min_frames | integer | Minimum number of frames for the frame interval. |
max_frames | integer | Maximum number of frames for the frame interval. |
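Both interval objects take min_frames/max_frames counts, so a spacing expressed in seconds has to be converted using the output frame rate. An illustrative sketch (interval_frames is a hypothetical helper, not part of the Hybrik API):

```python
from fractions import Fraction

def interval_frames(seconds, frame_rate):
    """Convert a keyframe spacing in seconds into the frame count
    used by idr_interval/iframe_interval min_frames and max_frames."""
    return round(seconds * Fraction(frame_rate))

# A 2-second GOP at 29.97 fps and at 25 fps:
print(interval_frames(2, Fraction(30000, 1001)))  # 60
print(interval_frames(2, 25))                     # 50
```

Using Fraction keeps 29.97-style rates exact, matching the fraction notation the frame_rate field itself accepts.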
Forced_keyframes
transcode.targets.video.forced_keyframes
Example Forced_keyframes Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"forced_keyframes": {
"frames": [
1000,
10000,
30000
]
}
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
kind | enum i idr |
The type of keyframes to be created - i or idr. |
frames | array | An array of frame numbers to use for the keyframe insertion. |
times_sec | array | An array of locations in seconds to use for the keyframe insertion. |
timecodes | array | An array of timecode values to use for the keyframe insertion. |
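Since frames, times_sec, and timecodes are alternative ways to address the same insertion points, timecodes can be pre-converted to frame numbers when working with integer frame rates. An illustrative sketch assuming non-drop-frame timecodes (timecode_to_frame is a hypothetical helper, not part of the Hybrik API):

```python
def timecode_to_frame(tc, fps):
    """Convert a non-drop HH:MM:SS:FF timecode into the frame
    number accepted by forced_keyframes.frames (integer fps only)."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

forced_keyframes = {
    "kind": "idr",
    "frames": [timecode_to_frame("00:00:40:00", 25),
               timecode_to_frame("00:06:40:00", 25)],
}
```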
Closed_captions
transcode.targets.video.closed_captions
Example Closed_captions Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"closed_captions": {
"enable_cea608": true,
"enable_cea708": true
}
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
suppress | boolean | Suppress including captions in the output. |
packets_per_chunk | integer | How many CC packets to insert into frames without CC. |
enable_scte20 | enum true false auto |
Enable SCTE 20 compatible captions in the output. Set to true, false(default), or "auto". |
enable_scte128 | enum true false auto |
Enable SCTE 128 compatible captions. Set to true, false, or "auto"(default) |
enable_a53 | enum true false auto |
Enable NTSC A53 compatible captions. Set to true, false, or "auto"(default) default: auto |
enable_quicktime_c608 | enum true false auto |
Enable NTSC 608 compatible captions in the QuickTime output. Set to true, false(default), or "auto" |
enable_quicktime_c708 | enum true false auto |
Enable NTSC 708 compatible captions in the QuickTime output. Set to true, false(default), or "auto" |
enable_smpte436m | enum true false auto |
Enable SMPTE 436M compatible captions in the MXF output. Set to true, false(default), or "auto" |
enable_cea608 | enum true false auto |
Enable NTSC 608 compatible captions in the output. Set to true, false, or "auto"(default) default: auto |
enable_cea708 | enum true false auto |
Enable NTSC 708 compatible captions in the output. Set to true, false, or "auto"(default) default: auto |
cc1_608_to_708_service_mapping | integer | This allows control over which 708 service will receive the 608 CC1 caption data. The default behavior for CC1 is to be mapped to 708 Service 1. |
cc2_608_to_708_service_mapping | integer | This allows control over which 708 service will receive the 608 CC2 caption data. |
cc3_608_to_708_service_mapping | integer | This allows control over which 708 service will receive the 608 CC3 caption data. The default behavior for CC3 is to be mapped to 708 Service 2. |
cc4_608_to_708_service_mapping | integer | This allows control over which 708 service will receive the 608 CC4 caption data. |
minimize_cea708_window_size | boolean | Automatically minimize the cea708 window size. Set to true or false (default). |
Mpeg2
transcode.targets.video.mpeg2
Example Mpeg2 Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.ts",
"existing_files": "replace",
"container": {
"kind": "mpeg2ts"
},
"video": {
"codec": "mpeg2",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"closed_captions": {
"enable_cea608": true,
"enable_cea708": true
}
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 48000
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
soft_telecine | boolean | This will produce a file with field-repeat flags set, playing back interlaced at 29.97. Requires a frame rate of 24000/1001 and ffmpeg version hybrik_3.2 or greater. |
Mov_atoms
transcode.targets.video.mov_atoms
Example Mov_atoms Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mov",
"existing_files": "replace",
"container": {
"kind": "mov"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"mov_atoms": {
"tapt": {
"clef": "1920:1080",
"prof": "1920:1080",
"enof": "1920:1080"
}
}
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 48000
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
copy | array | Array of atoms to be copied. |
pasp | string | Override the PASP atom in the form x:y. |
gama | number string |
Override the GAMA atom as a float value |
fiel | string | Override the FIEL atom with a pre-defined quicktime API integer value in hex notation (https://developer.apple.com/library/content/documentation/QuickTime/QTFF/QTFFChap3/qtff3.html#//apple_ref/doc/uid/TP40000939-CH205-124374) |
tapt | object | Override the TAPT atom with individual settings. |
clap | string | Override the CLAP atom in the form x:x:x:x:x:x:x:x (8 entries). |
use_clap | boolean | Set this to false to prevent writing the CLAP atom. default: true |
media_uuid | string | Provide UUID in the form '324D8401-7083-4F5F-A0B1-D768CED82E43' |
encoder | string | Provide STSD encoder string. |
Copy
transcode.targets.video.mov_atoms.copy
Example Copy Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mov",
"existing_files": "replace",
"container": {
"kind": "mov"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"mov_atoms": {
"copy": [
"clef",
"prof"
]
}
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 48000
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
atom | enum colr fiel tapt pasp |
Which atom to copy from source. |
Tapt
transcode.targets.video.mov_atoms.tapt
Example Tapt Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mov",
"existing_files": "replace",
"container": {
"kind": "mov"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"mov_atoms": {
"tapt": {
"clef": "1920:1080",
"prof": "1920:1080",
"enof": "1920:1080"
}
}
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 48000
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
clef | string | Override clef in the form width:height |
prof | string | Override prof in the form width:height |
enof | string | Override enof in the form width:height |
Hdr10
transcode.targets.video.hdr10
Example Hdr10 Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_hdr10.mp4",
"existing_files": "replace",
"nr_of_passes": 2,
"container": {
"kind": "mp4"
},
"video": {
"codec": "h265",
"width": 1920,
"height": 1080,
"bitrate_kb": 12000,
"max_bitrate_kb": 13200,
"vbv_buffer_size_kb": 13200,
"bitrate_mode": "vbr",
"chroma_format": "yuv420p10le",
"profile": "main10",
"level": "5.0",
"color_primaries": "bt2020",
"color_trc": "st2084",
"color_matrix": "bt2020nc",
"hdr10": {
"source": "config",
"master_display": "G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(40000000,47)",
"max_cll": 4000,
"max_fall": 1000
}
}
}
]
}
}
Name | Type | Description |
---|---|---|
source | enum config source_metadata source_document media metadata_file none |
The source for the HDR10 metadata. |
master_display | string |
The mastering display brightness. |
max_cll | number |
The maximum Content Light Level (CLL) for the file. maximum: 10000 |
max_fall | number |
The maximum Frame Average Light Level (FALL) for the file. maximum: 10000 |
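The `master_display` string in the example follows the common x265-style mastering display format: chromaticity coordinates in units of 0.00002 and luminance in units of 0.0001 cd/m² (per the SMPTE ST 2086 convention — an assumption about the encoding, since the format is not spelled out above). A sketch that decodes it into physical values:

```python
import re

def parse_master_display(s):
    # Parse "G(x,y)B(x,y)R(x,y)WP(x,y)L(max,min)" into physical units:
    # chromaticity coordinates are scaled by 1/50000, luminance by 1/10000.
    out = {}
    for key, a, b in re.findall(r"([A-Z]+)\((\d+),(\d+)\)", s):
        if key == "L":
            out["max_luminance"] = int(a) / 10000.0  # cd/m2
            out["min_luminance"] = int(b) / 10000.0  # cd/m2
        else:
            out[key] = (int(a) / 50000.0, int(b) / 50000.0)
    return out
```

For the string in the example, this yields a green primary of (0.265, 0.690) and a peak luminance of 4000 cd/m², i.e. BT.2020 primaries on a 4000-nit mastering display.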
Dolby_vision
transcode.targets.video.dolby_vision
Example Dolby_vision Object
{
"kind": "transcode",
"uid": "transcode_task",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination_location}}"
},
"options": {
"pipeline": {
"encoder_version": "hybrik_4.2_10bit"
}
},
"targets": [
{
"file_pattern": "dolby_vision_example.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h265",
"width": 1920,
"height": 1080,
"bitrate_kb": 16000,
"max_bitrate_kb": 18000,
"bitrate_mode": "vbr",
"dolby_vision": {
"profile": "5"
}
}
}
]
}
}
Name | Type | Description |
---|---|---|
profile | enum 5 8.1 hdr10 sdr |
The Dolby Vision Profile. |
compatible_peak_brightness | integer |
Peak brightness in candela per square meter for the optional content mapping of the HDR10-compatible layer. Only 1000 is allowed. minimum: 100 maximum: 10000 |
force_dm_version | enum 3 |
Force Dolby Vision metadata to be mapped to a specific version of the Dolby Vision Display Management (DM). |
convert_to_dolby_vision | object | Dolby Vision SDR/HDR conversion options |
Convert_to_dolby_vision
transcode.targets.video.dolby_vision.convert_to_dolby_vision
Example Convert_to_dolby_vision Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"options": {
"pipeline": {
"encoder_version": "hybrik_4.2_10bit"
}
},
"targets": [
{
"existing_files": "replace",
"file_pattern": "convert_to_dolby_vision_example.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h265",
"width": 1920,
"height": 1080,
"bitrate_kb": "16000",
"max_bitrate_kb": "18000",
"bitrate_mode": "vbr",
"chroma_format": "yuv420p10le",
"idr_interval": {
"seconds": "{{idr_interval_secs}}"
},
"dolby_vision": {
"profile": "5",
"convert_to_dolby_vision": "simple_conversion"
}
}
}
]
}
}
Name | Type | Description |
---|---|---|
mode | enum no_conversion simple_conversion |
Algorithm to use converting SDR/HDR content to Dolby Vision default: no_conversion |
max_luminance_from_sdr | integer |
Define peak brightness level of visible light (luminance) within a specific area for gamma transfer function. minimum: 100 maximum: 400 default: 200 |
max_luminance_from_hlg | integer |
Define peak brightness level of visible light (luminance) within a specific area for hybrid log–gamma transfer function. minimum: 400 maximum: 1000 default: 600 |
Image_sequence
transcode.targets.video.image_sequence
Example Image_sequence Object
{
"uid": "transcode_thumbnails",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder/thumbnails"
},
"targets": [
{
"uid": "thumbnails",
"file_pattern": "thumb_%05d.jpg",
"existing_files": "replace",
"video": {
"codec": "jpeg",
"width": 256,
"height": 144,
"image_sequence": {
"total_number": 100,
"offset_sec": 0
}
}
}
]
}
}
Name | Type | Description |
---|---|---|
total_number | integer | The total number of images to create when outputting a sequence of images. minimum: 1 |
start_number | integer | The starting number for files to use when outputting a sequence of images. |
offset_sec | number | The number of seconds to offset the output. |
relative_offset | number | The relative offset of the output, as a percentage of the duration. |
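For planning purposes it can help to see where the captured images will land on the timeline. The sketch below assumes even spacing of `total_number` captures across the remaining duration after `offset_sec` — Hybrik's exact sampling model is not specified here, so treat this as an approximation:

```python
def thumbnail_times(duration_sec, total_number, offset_sec=0.0):
    # Evenly space `total_number` capture points across the span that
    # remains after the initial offset (spacing model is an assumption).
    span = duration_sec - offset_sec
    step = span / total_number
    return [offset_sec + i * step for i in range(total_number)]
```

For a 100-second asset with `total_number: 4` this gives capture points at 0, 25, 50, and 75 seconds.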
Video Filters
Example Video Filters Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_output.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"bitrate_mode": "vbr",
"bitrate_kb": 1800,
"max_bitrate_kb": 2000,
"height": 720,
"filters": [
{
"kind": "crop",
"payload": {
"top": 20,
"bottom": 20
}
},
{
"kind": "print_timecode",
"payload": {
"x": "(w-tw)/2",
"y": "h/4",
"font_size": 20,
"source_timecode_selector": "gop"
}
}
]
},
"audio": [
{
"codec": "aac_lc",
"channels": 2,
"bitrate_kb": 96
}
]
}
]
}
}
transcode.targets.video.filters
Name | Type | Description |
---|---|---|
kind | enum print_timecode print_subtitle crop gaussian_blur text_overlay image_overlay video_overlay telecine deinterlace fade color_convert ffmpeg |
Specifies the type of filter being applied. default: ffmpeg |
options | object | The filter options. |
include_conditions | array | Specifies conditions under which this filter will be applied. Conditions can use JavaScript math.js syntax. |
overrides | object | Object defining parameters to be overridden in the filter. |
payload | anyOf print_timecode print_subtitle crop ffmpeg gaussian_blur text_overlay image_overlay video_overlay telecine deinterlace fade color_convert |
The payload of the video filter. |
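Since every filter entry shares the `kind`/`payload` shape, filter chains can be assembled programmatically before submitting a job. A small sketch (the `make_filter` helper is illustrative, not part of the Hybrik API; the allowed kinds are taken from the table above):

```python
def make_filter(kind, **payload):
    # Build one entry for transcode.targets.video.filters.
    allowed = {"print_timecode", "print_subtitle", "crop", "gaussian_blur",
               "text_overlay", "image_overlay", "video_overlay", "telecine",
               "deinterlace", "fade", "color_convert", "ffmpeg"}
    if kind not in allowed:
        raise ValueError(f"unknown filter kind: {kind}")
    return {"kind": kind, "payload": payload}

# Reproduces the filter chain from the Video Filters example above.
filters = [
    make_filter("crop", top=20, bottom=20),
    make_filter("print_timecode", x="(w-tw)/2", y="h/4", font_size=20,
                source_timecode_selector="gop"),
]
```

The resulting list drops directly into `transcode.targets.video.filters`.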
Options
transcode.targets.video.filters.options
Example Options Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "color_convert",
"options": {
"position": "pre_convert"
},
"payload": {
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
position | enum pre_analyze pre_normalize post_normalize pre_convert post_convert default |
Specifies where in the transcode source pipeline the filter will be applied. default: default |
Print_timecode
transcode.targets.video.filters.payload.print_timecode
Example Print_timecode Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "print_timecode",
"payload": {
"x": "(w-tw)/2",
"y": "h/4",
"font_size": 20,
"source_timecode_selector": "gop"
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
x | integer string |
The x location to start the imprint. Can use expressions such as w-20 (w: width of the video). default: 25 |
y | integer string |
The y location to start the imprint. Can use expressions such as h-20 (h: height of the video). default: 25 |
font | string | The font descriptor (compliant with fontconfig). Examples: 'Sans', 'URW Bookman L:style=Demi Bold Italic'. |
font_size | integer |
The font size in points. A font size of 16 is the default. default: 16 |
font_color | string |
See https://www.ffmpeg.org/ffmpeg-utils.html#Color for valid definitions. Example: blue: opaque blue, green@0.8: green with 0.8 alpha. default: green |
background_color | string |
See https://www.ffmpeg.org/ffmpeg-utils.html#Color for valid definitions. Example: blue: opaque blue, green@0.8: green with 0.8 alpha. |
border_size | integer |
Size of a border being drawn in background color. default: 0 |
timecode_kind | enum timecode_auto timecode_drop timecode_nodrop frame_nr media_time |
Choose the time/timecode format. If timecode_auto is used, drop/non-drop is chosen based on the frame rate. default: timecode_auto |
timecode_source | enum auto start_value media |
Select the timecode source for imprinting. default: media |
source_timecode_selector | enum first highest lowest mxf gop sdti smpte material_package source_package |
Specifies the metadata track to be used for time code data. default: first |
timecode_start_value | string |
Start time. The units depend on the timecode_kind. Only valid for timecode_source=start_value. |
lut_file | object | Allows referencing hosted LUT files, for example for SDR imprint into HDR video. |
lut_preset | enum r709_to_r2020_pq_300nit |
The LUT preset selection. |
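The table notes that `timecode_auto` chooses drop or non-drop based on the frame rate. A sketch of the plausible selection logic (an assumption about Hybrik's behavior, based on the fact that drop-frame timecode is only defined for the NTSC fractional rates):

```python
from fractions import Fraction

def auto_timecode_kind(frame_rate):
    # Drop-frame timecode exists only for 30000/1001 (29.97) and
    # 60000/1001 (59.94); every other rate gets non-drop timecode.
    fr = Fraction(frame_rate)
    if fr in (Fraction(30000, 1001), Fraction(60000, 1001)):
        return "timecode_drop"
    return "timecode_nodrop"
```

Passing the rate as an exact fraction (e.g. `"30000/1001"`) avoids float rounding issues around 29.97.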
Print_subtitle
transcode.targets.video.filters.payload.print_subtitle
Example Print_subtitle Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "print_subtitle",
"payload": {
"x_offset": "10%",
"y_offset": "20%",
"imprint_style": "ttml",
"language": "french"
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
x_offset | integer string |
The x offset. Can use expressions such as N%. If it is a number without further units, it will be considered as pixels. |
y_offset | integer string |
The y offset. Can use expressions such as N%. If it is a number without further units, it will be considered as pixels. |
time_offset_sec | number |
Specify a time offset (in seconds) in either direction. |
category | enum default forced sdh |
Optional: specify which source subtitle shall be rendered if multiple exist in the source asset. |
language | string |
Optional: specify which source subtitle shall be rendered if multiple exist in the source asset. |
font_size | string |
Optional: specify the font size to use when rendering subtitles. Usage of this setting means any font size settings that may already exist in the source subtitle are ignored. |
background_color | string |
Optional: specify the background color to use when rendering subtitles. Usage of this setting means any background color settings that may already exist in the source subtitle are ignored. |
imprint_style | enum auto closed_caption subtitle ttml |
The type of subtitle imprint to use. |
font_files | array | Allows referencing hosted font files, with an optional language specifier. |
lut_file | object | Allows referencing hosted LUT files, for example for SDR imprint into HDR video. |
lut_preset | enum r709_to_r2020_pq_300nit |
The LUT preset selection. |
is_optional | boolean | If set to true, the transcode will not fail if this media type does not exist in the source or no source subtitle could be located. |
Crop
transcode.targets.video.filters.payload.crop
Example Crop Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "crop",
"payload": {
"top": 20,
"bottom": 20,
"use_source_dimensions": true
}
}
]
},
"audio": [
]
}
]
}
}
Name | Type | Description |
---|---|---|
left | number string |
Number of pixels to crop from left. If '%' is appended, percentage of video width. "auto" uses the value reported from an up-stream black_borders analyzer task. |
right | number string |
Number of pixels to crop from right. If '%' is appended, percentage of video width. "auto" uses the value reported from an up-stream black_borders analyzer task. |
top | number string |
Number of pixels to crop from top. If '%' is appended, percentage of video height. "auto" uses the value reported from an up-stream black_borders analyzer task. |
bottom | number string |
Number of pixels to crop from bottom. If '%' is appended, percentage of video height. "auto" uses the value reported from an up-stream black_borders analyzer task. |
use_source_dimensions | boolean |
The default behavior for cropping is to use the output target dimensions. The source dimensions may be used by setting this flag. |
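Each crop edge accepts three forms: a pixel count, an `N%` string relative to the video width (left/right) or height (top/bottom), and `"auto"`, which takes the value reported by an upstream black_borders analyzer. A sketch of how one edge resolves (the `detected` parameter stands in for the analyzer result and is an assumption of this sketch):

```python
def resolve_crop(value, extent, detected=0):
    # Resolve one crop edge per the documented rules:
    # plain number -> pixels; "N%" -> percentage of the relevant extent;
    # "auto" -> value from an upstream black_borders analyzer.
    if value == "auto":
        return detected
    if isinstance(value, str) and value.endswith("%"):
        return int(extent * float(value[:-1]) / 100)
    return int(value)
```

So `"10%"` against a 1920-pixel width crops 192 pixels, while `top: 20` crops exactly 20.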
Ffmpeg
transcode.targets.video.filters.payload.ffmpeg
Example Ffmpeg Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "ffmpeg",
"payload": {
"ffmpeg_filter": "yadif=mode=0:deint=0"
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
support_files | array | Allows referencing support files (e.g. hosted font files). |
ffmpeg_filter | string |
This allows for a custom FFmpeg video filter string. See https://ffmpeg.org/ffmpeg-filters.html |
complex | boolean | Allows using filter_complex video filter string. |
Gaussian_blur
transcode.targets.video.filters.payload.gaussian_blur
Example Gaussian_blur Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "gaussian_blur",
"payload": {
"radius": "10"
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
radius | number |
Gaussian Blur Radius (in pixels). |
Text_overlay
transcode.targets.video.filters.payload.text_overlay
Example Text_overlay Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_text_overlay.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "text_overlay",
"payload": {
"x": 100,
"y": 100,
"opacity": 1,
"text": "YOUR TEXT HERE",
"font": "Times New Roman",
"font_color": "red",
"font_size": "h/5",
"duration_sec": 5,
"shadow_color": "blue",
"shadow_x": 0,
"shadow_y": 30
}
}
]
}
}
]
}
}
Name | Type | Description |
---|---|---|
x | integer string |
The x location to start the imprint. Can use expressions such as w-20 (w: width of the video). default: 25 |
y | integer string |
The y location to start the imprint. Can use expressions such as h-20 (h: height of the video). default: 25 |
opacity | number |
Opacity of the text overlay. 0 = fully transparent, 1 = fully opaque. default: 1 |
text | string | The text string to be drawn. |
text_file | object | Defines the location of the text file to be used. |
font | string | The font descriptor (compliant with fontconfig). Examples: 'Sans', 'URW Bookman L:style=Demi Bold Italic'. |
font_file | object | Defines the location of the font file to be used. |
font_color | string |
The color to be used for drawing fonts. Example: black: opaque black, green@0.8: green with 0.8 alpha. default: black |
font_size | integer string |
The font size in points. Can use expressions such as h/10 (w: width of the video, h: height of the video). A font size of 16 is the default. default: 16 |
tab_size | integer |
The size of the tab in number of spaces. The \t character is replaced with spaces. minimum: 1 default: 4 |
shadow_color | string |
The color to be used for drawing a shadow behind the drawn text. Example: black: opaque black, green@0.8: green with 0.8 alpha. default: black |
shadow_x | integer |
The x offset for the text shadow position with respect to the position of the text. It can be either a positive or a negative value. |
shadow_y | integer |
The y offset for the text shadow position with respect to the position of the text. It can be either a positive or a negative value. |
border_color | string |
The color to be used for drawing border around text. Example: black: opaque black, green@0.8: green with 0.8 alpha. |
border_size | integer |
The width of the border to be drawn around the text. |
background_color | string |
The color to be used for drawing box around text. Example: black: opaque black, green@0.8: green with 0.8 alpha. |
background_size | integer |
The width of the border to be drawn around the background box. |
fix_bounds | boolean |
Correct text coordinates to avoid clipping. |
start_sec | number |
Start point (in seconds) of the overlay. |
duration_sec | number |
Duration (in seconds) of the overlay. |
fadein_duration_sec | number |
Fade-in time (in seconds) of the overlay. |
fadeout_duration_sec | number |
Fade-out time (in seconds) of the overlay. |
Text_file
transcode.targets.video.filters.payload.text_overlay.text_file
Example Text_file Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload":
{
"location":
{},
"targets":
[
{
"file_pattern": "{source_basename}_text_overlay.mp4",
"container":
{
"kind": "mp4"
},
"video":
{
"codec": "h264",
"filters":
[
{
"kind": "text_overlay",
"payload":
{
"x": 100,
"y": 200,
"opacity": 1,
"text_file":
{
"storage_provider": "s3",
"url": "{{source_path}}/overlay_text.txt"
},
"font": "Times New Roman",
"font_color": "red",
"font_size": "w/40",
"duration_sec": 5,
"border_color": "green@0.5",
"border_size": 5
}
}
]
}
}
]
}
}
Name | Type | Description |
---|---|---|
url | string | The URL of the asset. |
storage_provider | enum ftp sftp s3 gs box akamains swift swiftstack http |
The storage provider for the asset. |
access | anyOf ftp sftp http s3 gs box akamains swift swiftstack ssh |
|
trim | anyOf by_sec_in_out by_sec_in_dur by_timecode by_asset_timecode by_frame_nr by_section_nr by_media_track by_nothing |
Object defining the type of trim operation to perform on an asset. |
encryption_id | string | Encryption id, used for referencing encryptions |
Font_file
transcode.targets.video.filters.payload.text_overlay.font_file
Example Font_file Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_text_overlay.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "text_overlay",
"payload": {
"x": 100,
"y": 400,
"opacity": 1,
"text": "YOUR TEXT HERE",
"font_file": {
"storage_provider": "s3",
"url": "{{font_file}}"
},
"font_color": "red",
"font_size": 60,
"duration_sec": 5,
"background_color": "white@0.5",
"background_size": 10
}
}
]
}
}
]
}
}
Name | Type | Description |
---|---|---|
url | string | The URL of the asset. |
storage_provider | enum ftp sftp s3 gs box akamains swift swiftstack http |
The storage provider for the asset. |
access | anyOf ftp sftp http s3 gs box akamains swift swiftstack ssh |
|
trim | anyOf by_sec_in_out by_sec_in_dur by_timecode by_asset_timecode by_frame_nr by_section_nr by_media_track by_nothing |
Object defining the type of trim operation to perform on an asset. |
encryption_id | string | Encryption id, used for referencing encryptions |
Image_overlay
transcode.targets.video.filters.payload.image_overlay
Example Image_overlay Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "image_overlay",
"payload": {
"image_file": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.png"
},
"x": "overlay_w - 20",
"y": 0,
"height": "source_h / 2",
"opacity": 0.75,
"start_sec": 5,
"fadein_duration_sec": 10,
"duration_sec": 30
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
image_file | object | Defines the location of the image file to be used. |
x | integer string |
X position of the overlay. Can use expressions such as overlay_w-20 (overlay_w: width of the overlay). The default expression centers the overlay. default: (video_w-overlay_w)/2 |
y | integer string |
Y position of the overlay. Can use expressions such as overlay_h-20 (overlay_h: height of the overlay). The default expression centers the overlay. default: (video_h-overlay_h)/2 |
width | number string |
Width of the overlay. Can use expressions such as source_w (width of the image source). |
height | number string |
Height of the overlay. Can use expressions such as source_h (height of the image source). |
opacity | number |
Opacity of the overlay image. 0 = fully transparent, 1 = fully opaque. default: 1 |
start_sec | number |
Start point (in seconds) of the overlay. |
duration_sec | number |
Duration (in seconds) of the overlay. |
fadein_duration_sec | number |
Fade-in time (in seconds) of the overlay. |
fadeout_duration_sec | number |
Fade-out time (in seconds) of the overlay. |
Video_overlay
transcode.targets.video.filters.payload.video_overlay
Example Video_overlay Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "video_overlay",
"payload": {
"video_file": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4"
},
"x": "overlay_w - 20",
"y": 0,
"height": "source_h / 2",
"opacity": 0.75,
"start_sec": 5,
"fadein_duration_sec": 10
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
video_file | object | The location of the video file to be overlaid on the output target. |
x | integer string |
X position of the overlay. Can use expressions such as overlay_w-20 (overlay_w: width of the overlay). default: 25 |
y | integer string |
Y position of the overlay. Can use expressions such as overlay_h-20 (overlay_h: height of the overlay). default: 25 |
width | number string |
Width of the overlay. Can use expressions such as source_w (width of the video source). |
height | number string |
Height of the overlay. Can use expressions such as source_h (height of the video source). |
opacity | number |
Opacity of the overlay image. 0 = fully transparent, 1 = fully opaque. default: 1 |
start_sec | number |
Start point (in seconds) of the overlay. |
fadein_duration_sec | number |
Fade-in time (in seconds) of the overlay. |
duration_sec | number |
Duration (in seconds) of the overlay. |
fadeout_duration_sec | number |
Fade-out time (in seconds) of the overlay. |
repeat_count | integer |
How many times to repeat the video overlay on the primary video. To repeat indefinitely use "-1". 0 = no repeat, 1 = repeats the top video one time, 2 = repeats the top video two times. |
Telecine
transcode.targets.video.filters.payload.telecine
Example Telecine Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "telecine",
"payload": {
"interlace_mode": "tff",
"pattern": "23"
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
interlace_mode | enum tff bff |
The interlacing mode. tff: top field first. bff: bottom field first. default: tff |
pattern | enum 22 23 2332 222222222223 |
Specify the desired cadence pattern. 22 is a special case, causing interlacing without a frame rate change. default: 2332 |
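The cadence digits encode how many fields each source frame contributes, and two fields make one interlaced output frame, which determines the frame rate change. A sketch of the arithmetic:

```python
def telecine_rate_ratio(pattern):
    # Output/input frame-rate ratio implied by a pulldown cadence.
    # Each digit is the number of fields emitted per source frame;
    # two fields form one interlaced output frame.
    fields = sum(int(d) for d in pattern)
    frames = len(pattern)
    return (fields / 2) / frames
```

"23" and "2332" both emit 10 fields per 4 frames, the classic 24-to-30 pulldown (ratio 5/4); "22" emits 2 fields per frame, so the rate is unchanged; "222222222223" yields 25/24, the 24p-to-25i case.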
Deinterlace
transcode.targets.video.filters.payload.deinterlace
Example Deinterlace Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"kind": "deinterlace",
"payload": {
"interlace_mode": "auto",
"motion_compensation": true,
"motion_compensation_quality": "high"
}
},
"audio": [
]
}
]
}
}
Name | Type | Description |
---|---|---|
interlace_mode | enum tff bff auto frame_metadata |
The source interlacing mode - auto means auto-detect. default: auto |
deinterlace_mode | enum 0 1 2 3 |
This setting controls the deinterlacing method. 0: Temporal & Spatial Check - maintains the number of frames, so 25fps interlaced converts to 25fps progressive; motion may not look as fluid when the frame rate is maintained. 1: Bob, Temporal & Spatial Check - doubles the number of frames, so 25fps interlaced converts to 50fps progressive; use this mode to double the frame rate. 2: Skip Spatial Temporal Check - saves encoding time at the cost of visual quality. 3: Bob, Skip Spatial Temporal Check - saves encoding time at the cost of visual quality. default: 3 |
motion_compensation | boolean | Use a motion-compensated deinterlacer. Quality is better but CPU use will be significantly higher. |
motion_compensation_quality | enum low medium high veryhigh |
Quality settings for the motion-compensated deinterlacer. |
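The frame-rate effect of `deinterlace_mode` described above reduces to a simple mapping: the "Bob" variants (modes 1 and 3) double the frame rate, while modes 0 and 2 preserve it. As a sketch:

```python
def deinterlaced_fps(input_fps, deinterlace_mode):
    # Modes 1 and 3 are "Bob" variants: each field becomes a frame,
    # doubling the rate. Modes 0 and 2 keep the source frame count.
    if deinterlace_mode in (1, 3):
        return input_fps * 2
    return input_fps
```

So 25fps interlaced input with mode 1 yields 50fps progressive output, matching the description in the table.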
Fade
transcode.targets.video.filters.payload.fade
Example Fade Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "fade",
"payload": {
"mode": "in",
"start_sec": 0,
"duration_sec": 3
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
mode | enum in out |
The fade mode, in or out. default: in |
start_sec | number | Fade start time in seconds. If trim is used, time offset is calculated after trim is applied. |
duration_sec | number | Fade duration in seconds. |
Color_convert
transcode.targets.video.filters.payload.color_convert
Example Color_convert Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "color_convert",
"payload": {
"from": {
"ire_range_mode": "full",
"color_primaries": "bt2020",
"color_trc": "hlg",
"color_matrix": "bt2020c"
},
"to": {
"ire_range_mode": "limited",
"color_primaries": "bt709",
"color_trc": "gamma28",
"color_matrix": "bt709"
},
"nominal_peak_luminance": 1000
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
from | object | The color description for the source. |
to | object | The color description for the target. |
nominal_peak_luminance | number | The nominal peak luminance to be used during color format conversion. |
preset | enum hdr_hlg_to_sdr hdr_hlg_to_sdr_desat_mild hdr_hlg_to_sdr_desat_medium hdr_pq_to_sdr hdr_pq_to_sdr_desat_mild hdr_pq_to_sdr_desat_medium sdr_to_hdr_pq_200nit sdr_to_hdr_pq_300nit |
Common presets used for converting between color spaces. |
lut_file | object | The LUT file to be used during the color conversion. |
From
transcode.targets.video.filters.payload.color_convert.from
Example From Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "color_convert",
"payload": {
"from": {
"ire_range_mode": "full",
"color_primaries": "bt2020",
"color_trc": "hlg",
"color_matrix": "bt2020c"
},
"to": {
"ire_range_mode": "limited",
"color_primaries": "bt709",
"color_trc": "gamma28",
"color_matrix": "bt709"
},
"nominal_peak_luminance": 1000
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
ire_range_mode | enum auto full limited |
The IRE range (video signal range) of the content: full or limited. |
color_primaries | enum bt601 bt709 bt470m bt470bg smpte170m smpte240m bt2020 |
Chroma coordinate reference of the primaries. The default is determined by video size. |
color_trc | enum bt601 bt709 st2084 bt470bg smpte170m smpte240m smpte428 linear iec61966 2.1 bt2020_10bit bt2020_12bit hlg arib_stdb67 |
Color transfer characteristics. The default is determined by video size. |
color_matrix | enum bt470bg bt601 bt709 smpte170m smpte240m bt2020c bt2020nc smpte2085 |
YUV/YCbCr colorspace type. |
To
transcode.targets.video.filters.payload.color_convert.to
Example To Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"filters": [
{
"kind": "color_convert",
"payload": {
"from": {
"ire_range_mode": "full",
"color_primaries": "bt2020",
"color_trc": "hlg",
"color_matrix": "bt2020c"
},
"to": {
"ire_range_mode": "limited",
"color_primaries": "bt709",
"color_trc": "gamma28",
"color_matrix": "bt709"
},
"nominal_peak_luminance": 1000
}
}
]
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
ire_range_mode | enum auto full limited |
The IRE range (video signal range) of the content: full or limited. |
color_primaries | enum bt601 bt709 bt470m bt470bg smpte170m smpte240m bt2020 |
Chroma coordinate reference of the primaries. The default is determined by video size. |
color_trc | enum bt601 bt709 st2084 bt470bg smpte170m smpte240m smpte428 linear iec61966 2.1 bt2020_10bit bt2020_12bit hlg arib_stdb67 |
Color transfer characteristics. The default is determined by video size. |
color_matrix | enum bt470bg bt601 bt709 smpte170m smpte240m bt2020c bt2020nc smpte2085 |
YUV/YCbCr colorspace type. |
Scaler
transcode.targets.video.scaler
Example Scaler Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 1920,
"height": 1080,
"bitrate_kb": 6000,
"max_bitrate_kb": 8000,
"bitrate_mode": "vbr",
"scaler": {
"kind": "zscale",
"config_string": "dither=error_diffusion",
"apply_always": true
}
},
"audio": [
{
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
kind | enum default zscale |
The type of scaling to be applied. default: default |
algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for scaling operations. This will apply to both up-scale and down-scale. These may be set separately using the upscale_algorithm and downscale_algorithm parameters. |
upscale_algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for up-scaling operations. |
downscale_algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for down-scaling operations. |
config_string | string | The configuration string to be used with the specified scaling function. |
apply_always | boolean | Always use the specified scaling function. |
Audio
Example Audio Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_output{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mpegts"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "ac3",
"pid": 482,
"channels": 6,
"sample_rate": 48000,
"bitrate_kb": 384
},
{
"codec": "aac_lc",
"pid": 483,
"channels": 2,
"sample_rate": 48000,
"bitrate_kb": 128
}
]
}
]
}
}
Audio in Hybrik is represented by an array structure. The arrays are zero-based, meaning that the first audio track is represented by audio[0]. So, referencing the 3rd channel of the 2nd track of audio would be: audio[1], channel[2].
transcode.targets.audio
Name | Type | Description |
---|---|---|
include_if_source_has | array | This array allows for conditionally outputting tracks based on whether or not a specific input track exists. The tracks in the source are referred to by number reference: audio[0] refers to the first audio track. |
include_conditions | array | This array allows for conditionally including output audio tracks based on conditions in the input file. |
verify | boolean | Enable or disable post transcode verification for this track. default: true |
codec | enum copy aac mpeg2_aac mpeg4_aac aac_lc heaac_v1 heaac_v2 heaac_auto s302m mp2 pcm mp3 ac3 aiff alac flac eac3 vorbis opus dolby_digital dolby_digital_plus |
The audio codec to use. Selecting 'copy' will attempt to use the compressed source audio stream. |
codec_provider | enum default ffmpeg mainconcept_v10 mainconcept_v11 mainconcept_v13 |
The codec provider to be used for encoding. |
pid | integer |
The audio program ID. This is only used for MPEG transport streams. maximum: 8190 |
channels | integer |
The number of audio channels. minimum: 1 maximum: 16 |
dolby_digital_plus | object | The parameters for Dolby Digital Plus encoding. |
sample_size | enum 8 16 24 32 64 |
The audio sample size in bits. default: 24 |
sample_format | enum pcm_s8 pcm_u8 pcm_f16le pcm_f24le pcm_f32le pcm_f64le pcm_f16be pcm_f24be pcm_f32be pcm_f64be pcm_s16le pcm_s24le pcm_s32le pcm_s64le pcm_s16be pcm_s24be pcm_s32be pcm_s64be pcm_u16le pcm_u24le pcm_u32le pcm_u64le pcm_u16be pcm_u24be pcm_u32be pcm_u64be |
The audio sample format/description. |
sample_rate | integer |
The audio sample rate in Hz. Typical values are 44100 and 48000. Omit to use the source sample rate. |
bitrate_mode | enum cbr vbr |
Select between constant and variable bitrate encoding. Note that not all codecs support all bitrate modes. Omit this value to use the codec's default. |
bitrate_kb | number |
The audio bitrate in kilobits per second. This is the average bitrate in the case of vbr. Not all audio codecs support this setting. Omit to use the codec's default. minimum: 1 maximum: 1024 |
min_bitrate_kb | number |
The minimum audio bitrate in kilobits per second. Valid for vbr only. minimum: 1 maximum: 1024 |
max_bitrate_kb | number |
The maximum audio bitrate in kilobits per second. Valid for vbr only. minimum: 1 maximum: 1024 |
language | string |
The audio language code. ISO-639 notation is preferred, but Hybrik will attempt to convert the passed language identifier. |
default_language | string |
The default audio language code. It is used when 'language' is not set and the source language cannot be converted to valid ISO-639 notation. ISO-639 notation is preferred, but Hybrik will attempt to convert the passed language identifier. |
disposition | enum default dub original comment lyrics karaoke audio_description spoken_subtitles clean_audio |
The audio disposition. |
track_name | string |
The name of this audio track - will be used for mov files and MPEG-DASH (representation::id) for example. May be ignored, depending on your container format. |
track_group_id | string | This indicates which Group this track belongs to. Multiple tracks with the same content but different bitrates would have the same track_group_id. |
layer_id | string | This indicates which Layer this track belongs to. For example, this allows bundling one video layer and multiple audio layers with the same bitrates but different languages. |
layer_affinities | array | This indicates which other layers this layer can be combined with. For example, to combine audio and video layers. |
filters | array | An array of audio filters that will be applied in order to the output audio. |
channel_designators | array enum unknown left right front_left front_right front_center back_left back_right front_left_of_center front_right_of_center back_center side_left side_right left_height right_height center lfe_screen left_surround right_surround left_center right_center center_surround left_surround_direct right_surround_direct top_center_surround vertical_height_left vertical_height_center vertical_height_right top_back_left top_back_center top_back_right top_front_left top_front_center top_front_right rear_surround_left rear_surround_right left_wide right_wide lfe2 left_total right_total hearing_impaired narration mono dialog_centric_mix center_surround_direct haptic headphones_left headphones_right click_track foreign_language discrete discrete_0 discrete_1 discrete_2 discrete_3 discrete_4 discrete_5 discrete_6 discrete_7 discrete_8 discrete_9 discrete_10 discrete_11 discrete_12 discrete_13 discrete_14 discrete_15 |
The channel designators describing the role of each output audio channel, in order. |
convert_aac_headers | enum adts_to_asc |
Resolves AAC transmux issues between MP4 and TS/raw tracks. |
aac_header_interval | integer | Allows solving specific hardware playback compliance problems. |
dialnorm | number string |
Dialogue Level (aka dialogue normalization or dialnorm) is the average dialogue level of a program over time, measured with an LAEq meter, referenced to 0 dBFS. |
mainconcept_stream_mux_options | string |
Provide direct stream instruction to the MainConcept multiplexer. Values are constructed as "prop=val,prop=val". See MainConcept documentation for valid values. |
mainconcept_audio_options | string |
MainConcept specific codec options - please reference the mainconcept codec documentation. |
mainconcept_audio_profile | enum MPEG1 MPEG2 DVB DVHS VCD SVCD DVD DVD_MPEG1 DVD_DVR DVD_DVR_MPEG1 MMV HDV_HD1 HDV_HD2 |
One of the preset values for profile (e.g. mpeg1). |
pcm_wrapping | enum raw bwf aes |
The type of wrapping to use for PCM audio tracks. |
ffmpeg_args | string |
The FFmpeg (target) command line arguments to be used. Note that these will override competing settings in the JSON. |
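Putting the conditional-output parameters above to use, a target could emit a second audio track only when the source actually carries one. This is a sketch; the exact element syntax inside `include_if_source_has` is an assumption based on the `audio[0]` notation described above:

```json
"audio": [
  {
    "codec": "aac_lc",
    "channels": 2,
    "bitrate_kb": 128
  },
  {
    "include_if_source_has": ["audio[1]"],
    "codec": "aac_lc",
    "channels": 2,
    "bitrate_kb": 128,
    "language": "es"
  }
]
```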
Dolby Digital Plus
transcode.targets.audio.dolby_digital_plus
Example Dolby_digital_plus Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_output{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "dolby_digital_plus",
"channels": 2,
"sample_rate": 48000,
"dolby_digital_plus": {
"bitstream_mode": "complete_main",
"surround_attenuation_3_db": true
}
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
constraints | enum broadcast_atsc broadcast_ebu streaming bluray |
The target output constraint to use for Dolby Digital encoding. |
audio_coding_mode | enum 3/2 3/2L 3/1 3/1L 3/0 3/0L 2/2 2/1 2/0 1/0 |
Defines the number of full bandwidth audio channels being encoded. |
bitstream_mode | enum complete_main music_and_effects visually_impaired hearing_impaired dialogue commentary emergency voice |
Type of audio bitstream being processed. 'complete_main' and 'music_and_effects' are main audio services; the rest are associated audio services. |
dialnorm | number | Dialogue Level (aka dialogue normalization or dialnorm) is the average dialogue level of a program over time, measured with an LAEq meter, referenced to 0 dBFS. This is the equivalent of loudness_target in DPLC, with a range of -31 dB to -1 dB. minimum: -31 maximum: -1 |
room_type | enum not_indicated small large |
The room type intended for the decoding. |
phase_shift_90_degree | boolean | A 90° phase shift can be applied to the surround channels during encoding. This is useful for generating multichannel bitstreams which, when downmixed, can create a true Dolby Surround compatible output (Lt/Rt). Default setting is true. |
dolby_surround_mode | enum not_indicated enabled |
Indicates to a Dolby Digital decoding product whether the two-channel encoded bitstream requires Pro Logic decoding. |
dolby_digital_surround_ex_mode | enum not_indicated enabled |
This parameter is used to identify the encoded audio as material encoded in Surround EX. This parameter is only used if the encoded audio has two Surround channels. |
surround_attenuation_3_db | boolean | –3 dB attenuation can be used to reduce the levels of the surround channels to compensate between the calibration of film dubbing stages and consumer replay environments. The surround channels in film studios are set 3 dB lower than the front channels (unlike consumer applications of 5.1), leading to the level on tape being 3 dB higher. Apply the 3 dB attenuation when using a master mixed in a film room. |
dynamic_range_control | object | Dynamic Range Presets allow the user to select the compression characteristic that is applied to the Dolby Digital bitstream during decoding. These compression presets aid playback in less-than-ideal listening environments. |
stereo_downmix_preference | object | The settings for the final downmix. |
Dynamic_range_control
transcode.targets.audio.dolby_digital_plus.dynamic_range_control
Example Dynamic_range_control Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_output{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "dolby_digital_plus",
"channels": 2,
"sample_rate": 48000,
"dolby_digital": {
"bitstream_mode": "complete_main",
"surround_attenuation_3_db": true,
"dynamic_range_control": {
"line_mode_profile": "speech"
}
}
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
line_mode_profile | enum none film_std film_light music_std music_light speech |
Settings for line compression mode. |
rf_mode_profile | enum none film_std film_light music_std music_light speech |
Settings for RF compression mode. |
Stereo_downmix_preference
transcode.targets.audio.dolby_digital_plus.stereo_downmix_preference
Example Stereo_downmix_preference Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_output{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "dolby_digital_plus",
"channels": 2,
"sample_rate": 48000,
"dolby_digital_plus": {
"bitstream_mode": "complete_main",
"surround_attenuation_3_db": true,
"stereo_down_mix_preference": {
"loro_center_mix_level": -3,
"loro_surround_mix_level": -3,
"ltrt_center_mix_level": -3,
"ltrt_surround_mix_level": -3,
"preferred_downmix_mode": "loro"
}
}
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
mode | enum not_indicated lo/ro lt/rt pro_logic_2 |
The mode for the downmix. |
ltrt_center_mix_level | number string |
The LtRt Center mix level. |
ltrt_sur_mix_level | number string |
The LtRt Surround mix level. |
loro_center_mix_level | number string |
The LoRo Center mix level. |
loro_sur_mix_level | number string |
The LoRo Surround mix level. |
Audio Filters
Example Audio Filters Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264"
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"bitrate_kb": 256,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "ebur128",
"payload": {
"allow_unprecise_mode": true,
"integrated_lufs": -16,
"true_peak_dbfs": -3
}
}
}
]
},
{
"codec": "pcm",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 512,
"filters": [
{
"kind": "fade",
"payload": {
"mode": "in",
"start_sec": 0,
"duration_sec": 3
}
}
]
}
]
}
]
}
}
Hybrik supports a number of audio filters, including level control, normalization, and fading. Hybrik also supports the application of standard FFmpeg audio filters.
transcode.targets.audio.filters
Name | Type | Description |
---|---|---|
kind | enum ffmpeg level normalize fade kantar_watermarking |
The type of audio filter being applied. default: ffmpeg |
options | object | The options for where in the transcoding pipeline the filter will be applied. |
include_conditions | array | Specifies conditions under which this filter will be applied. Can use JavaScript math.js nomenclature. |
payload | anyOf ffmpeg level normalize fade kantar_watermarking |
Configuration options for the specified audio filter. |
Options
transcode.targets.audio.filters.options
Example Options Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264"
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"bitrate_kb": 256,
"filters": [
{
"kind": "level",
"options": {
"position": "pre_normalize"
},
"payload": {
"kind": "ebur128",
"payload": {
"factor": 1.5
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
position | enum pre_analyze pre_normalize post_normalize pre_convert post_convert default |
Specifies where in the transcode pipeline the filter will be applied. default: default |
Ffmpeg
transcode.targets.audio.filters.payload.ffmpeg
Example Ffmpeg Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 48000,
"filters": [
{
"kind": "ffmpeg",
"payload": {
"ffmpeg_filter": "adelay=1500|0|500"
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
support_files | array | Support files referenced inside of filter descriptions. |
ffmpeg_filter | string |
Set the arguments for custom FFmpeg audio filter processing. See https://ffmpeg.org/ffmpeg-filters.html |
Level
transcode.targets.audio.filters.payload.level
Example Level Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "level",
"payload": {
"factor": 1.5
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
factor | number | Multiplication factor. May lead to clipping. |
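The `factor` is a linear multiplier, while gain is often specified in dB. A minimal conversion sketch (an illustrative helper, not part of the Hybrik API):

```python
def gain_db_to_factor(gain_db: float) -> float:
    """Convert a gain in dB to the linear multiplication factor
    expected by the level filter."""
    return 10.0 ** (gain_db / 20.0)

# A +6 dB boost corresponds to roughly doubling the sample values:
factor = gain_db_to_factor(6.0)  # ~1.995
```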
Normalize
transcode.targets.audio.filters.payload.normalize
Example Normalize Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "peak",
"payload": {
"peak_level_db": -3
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
kind | enum ebur128 peak rms dynamic dolby_professional_loudness |
All methods except dynamic will either use analysis values supplied here, or force 2-pass encoding to determine accurate levels for adjusting them in a later pass. default: ebur128 |
payload | anyOf ebur128 peak rms dynamic dolby_professional_loudness |
The payload for the audio normalize filter. |
Ebur128
transcode.targets.audio.filters.payload.normalize.payload.ebur128
Example Ebur128 Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "ebur128",
"payload": {
"integrated_lufs": -16,
"true_peak_dbfs": -3,
"allow_unprecise_mode": false
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
source | object | Substitute EBU R.128 analysis results, for cases where the analysis was run separately. |
integrated_lufs | number | LUFS = Loudness Units Full Scale. The Integrated value means the loudness integrated over the entire length of the program. European TV applications have a recommended level of -23 LUFS. Web services like iTunes and YouTube have targets of -16 and -14 respectively. |
loudness_lra_lufs | number | LRA = Loudness Range. This quantifies the statistical distribution of short-term loudness within a program. A low LRA (1 to 3 LU) indicates material with a narrow dynamic range. |
true_peak_dbfs | number | The maximum true peak level to allow, in dBFS. True peak accounts for intersample peaks. |
allow_unprecise_mode | boolean | Using unprecise mode allows for normalization without running an analysis first. As you might guess, this is less precise than running an EBU R.128 analysis as part of an Analyzer Task first. |
analyzer_track_index | integer | This specifies which analyzer track data to use for this filter. |
is_optional | boolean | If set to true, the transcode will not fail if this media type did not exist in the source. |
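Normalizing to a target integrated loudness amounts to applying the offset between the measured and target values, limited so the result stays under the true peak ceiling. A sketch of the underlying arithmetic (illustrative only, not Hybrik internals):

```python
def normalization_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """Gain in dB needed to move a program from its measured integrated
    loudness to the target, e.g. -23 LUFS for EBU R.128 broadcast."""
    return target_lufs - measured_lufs

def allowed_gain_db(measured_lufs: float, target_lufs: float,
                    measured_tp_dbfs: float, max_tp_dbfs: float) -> float:
    """Gain limited so the resulting true peak stays at or below the ceiling
    given by true_peak_dbfs."""
    return min(normalization_gain_db(measured_lufs, target_lufs),
               max_tp_dbfs - measured_tp_dbfs)

# A program at -20 LUFS targeting -23 LUFS needs -3 dB of gain:
gain = normalization_gain_db(-20.0, -23.0)  # -3.0
```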
Source
transcode.targets.audio.filters.payload.normalize.payload.ebur128.source
Example Source Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "ebur128",
"payload": {
"source": {
"integrated_lufs": -20,
"true_peak_dbfs": -1,
},
"integrated_lufs": -16,
"true_peak_dbfs": -3,
"allow_unprecise_mode": false
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
integrated_lufs | number | LUFS = Loudness Units Full Scale. The Integrated value means the loudness integrated over the entire length of the program. European TV applications have a recommended level of -23 LUFS. Web services like iTunes and YouTube have targets of -16 and -14 respectively. |
loudness_lra_lufs | number | LRA = Loudness Range. This quantifies the statistical distribution of short-term loudness within a program. A low LRA (1 to 3 LU) indicates material with a narrow dynamic range. |
true_peak_dbfs | number | The measured true peak level, in dBFS. True peak accounts for intersample peaks. |
integrated_threshold_lufs | number | Sets threshold value for integrated LUFS. |
offset | number | DC offset. |
Peak
transcode.targets.audio.filters.payload.normalize.payload.peak
Example Peak Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "peak",
"payload": {
"peak_level_db": -3
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
source | object | Normalization analysis results for cases where the analysis was run separately. |
peak_level_db | number | Audio peak level in decibels (dB). |
analyzer_track_index | integer | Limits analyzer to run on a particular track. |
is_optional | boolean | If set to true, the transcode will not fail if this media type did not exist in the source. |
Source
transcode.targets.audio.filters.payload.normalize.payload.peak.source
Example Source Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "peak",
"payload": {
"source": {
"peak_level_db": -1
},
"peak_level_db": -3
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
peak_level_db | number | Audio peak level in decibels (dB). |
Rms
transcode.targets.audio.filters.payload.normalize.payload.rms
Example Rms Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "rms",
"payload": {
"rms_level_db": -3
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
source | object | Normalization analysis results for cases where the analysis was run separately. |
rms_level_db | number | The RMS level in dB. |
analyzer_track_index | integer | This specifies which audio track to analyze. |
is_optional | boolean | If set to true, the transcode will not fail if this media type did not exist in the source. |
Source
transcode.targets.audio.filters.payload.normalize.payload.rms.source
Example Source Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "rms",
"payload": {
"source": {
"rms_level_db": -3
},
"rms_level_db": -1
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
rms_level_db | number | The RMS level in dB. |
Dynamic
transcode.targets.audio.filters.payload.normalize.payload.dynamic
Example Dynamic Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "dynamic",
"payload": {
"peak_level_db": -3,
"window_size_samples": 256
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
peak_level_db | number | The Peak Level in dB. |
rms_level_db | number | The RMS level in dB. |
window_size_samples | integer | The window size (in samples) to use for the RMS calculation. |
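The RMS level referenced above is computed per analysis window. A sketch of the underlying calculation (illustrative arithmetic, not Hybrik internals):

```python
import math

def rms_db(samples):
    """RMS level in dBFS of a window of float samples (full scale = 1.0)."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 20.0 * math.log10(math.sqrt(mean_square))

# A constant signal at half of full scale sits at about -6.02 dBFS:
level = rms_db([0.5] * 256)
```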
Dolby_professional_loudness
transcode.targets.audio.filters.payload.normalize.payload.dolby_professional_loudness
Example Dolby_professional_loudness Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "ac3",
"channels": 6,
"filters": [
{
"kind": "normalize",
"payload": {
"kind": "dolby_professional_loudness",
"payload": {
"regulation_type": "atsc_a85_fixed",
"integrated_lufs": -16,
"dialogue_intelligence": true
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
is_optional | boolean | If set to true, the transcode will not fail if this media type did not exist in the source. |
correction_mode | enum pcm_normalization metadata_update |
The Dolby Professional Loudness Correction mode. |
use_dialogue_intelligence | boolean | Enable Dolby Dialogue Intelligence. |
regulation_type | enum atsc_a85_fixed atsc_a85_agile ebu_r128 freetv_op59 arib_tr_b32 manual |
The type of regulation to use for Dolby Professional Loudness Correction. |
loudness_target | number | The loudness LUFS target (-31..-8 dB). This is the equivalent of dialnorm in the Dolby Digital encoder. minimum: -31 maximum: -8 |
speech_detection_threshold | integer | The speech detection threshold (0..100, increments of 1). maximum: 100 |
limit_mode | enum true_peak sample_peak |
Specifies whether true peak or sample peak is used as the basis for leveling. |
peak_limit_db | number | The peak value in dB to use for loudness correction, -12 to -0.1 dBTP (in increments of 0.1 dBTP). minimum: -12 maximum: -0.1 |
analyzer_track_index | integer | This specifies which analyzer track data to use for this filter. |
Fade
transcode.targets.audio.filters.payload.fade
Example Fade Object
{
"uid": "transcode_media",
"kind": "transcode",
"payload": {
"location": {
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"container": {
"kind": "mp4"
},
"video": {
},
"audio": [
{
"codec": "heaac_v2",
"channels": 4,
"sample_rate": 48000,
"filters": [
{
"kind": "fade",
"payload": {
"mode": "in",
"start_sec": 0,
"duration_sec": 3
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
mode | enum in out |
The fade mode, in or out. default: in |
start_sec | number | Fade start time in seconds. |
duration_sec | number | Fade duration in seconds. |
curve | enum sinus linear |
The fade curve type. default: sinus |
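The two curve types differ in how gain ramps across the fade window. A sketch of the gain at time t for a fade-in (the exact shape Hybrik uses for `sinus` is an assumption here; a quarter-sine ramp is the common choice):

```python
import math

def fade_in_gain(t_sec: float, duration_sec: float, curve: str = "sinus") -> float:
    """Gain (0..1) applied at time t within a fade-in window."""
    x = min(max(t_sec / duration_sec, 0.0), 1.0)  # normalized position in the fade
    if curve == "linear":
        return x
    return math.sin(x * math.pi / 2.0)  # assumed quarter-sine ramp for 'sinus'

# Midway through a 3-second sinus fade-in, the gain is ~0.707:
g = fade_in_gain(1.5, 3.0)
```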
Kantar_watermarking
transcode.targets.audio.filters.payload.kantar_watermarking
Example Kantar_watermarking Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {},
"targets": [
{
"file_pattern": "{source_basename}_kantar.wav",
"container": {
"kind": "wav"
},
"audio": [
{
"codec": "pcm",
"channels": 2,
"filters": [
{
"kind": "kantar_watermarking",
"payload": {
"username": "Replace.This",
"password": "RePlAcEtHiS",
"content_id": "some_text",
"channel_name": "whatever",
"license_id": 123456,
"offset": 0,
"port": 443,
"url": "licenseofe.kantarmedia.com",
"metadata": {
"metadata_3": "1111"
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
username | string | Login for authorization. |
password | string | Password for authorization. |
content_id | string | Identification string. |
channel_name | string | The channel name. If not provided, the first value available from the Kantar server is used. |
license_id | number | The license id. If not provided, the first value available from the Kantar server is used. |
offset | number | Optional: the audio offset in seconds. |
port | number | Optional: the ability to manually define the server port. |
url | string | Optional: the ability to manually define the server URL. |
metadata | object | Optional: the ability to set metadata (descriptions). |
results_file | object | The location for the output Kantar watermark log data. |
Results_file
transcode.targets.audio.filters.payload.kantar_watermarking.results_file
Example Results_file Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {},
"targets": [
{
"file_pattern": "{source_basename}_kantar.wav",
"container": {
"kind": "wav"
},
"audio": [
{
"codec": "pcm",
"channels": 2,
"filters": [
{
"kind": "kantar_watermarking",
"payload": {},
"results_file": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"file_pattern": "log_kantar.xml"
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
location | object | Location to store the result file information. |
file_pattern | string | The file pattern for the result file information. |
Timecode
Example Timecode Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted{default_extension}",
"existing_files": "replace",
"timecode": [
{
"source": "start_value",
"start_value": "01:00:00;00"
}
],
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264",
"frame_rate": "30000/1001"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 96
}
]
}
]
}
}
transcode.targets.timecode
Name | Type | Description |
---|---|---|
include_if_source_has | array | This array allows for conditionally outputting tracks based on whether or not a specific input track exists. The tracks in the source are referred to by number reference: timecode[0] refers to the first timecode track. |
include_conditions | array | An array defining the include conditions for this time code track. |
verify | boolean | Enable or disable post transcode verification for this track. default: true |
source | enum auto start_value media |
The source to be used for time code data. A specific value can be forced by selecting start_value. |
source_timecode_selector | enum first highest lowest mxf gop sdti smpte material_package source_package |
Specifies the metadata track to be used for time code data. default: first |
start_value | string |
Start time code, use hh:mm:ss:nr (non-drop) or hh:mm:ss;nr (drop). |
timecode_frame_rate | string |
The frame rate to be used for time code calculations. |
force_drop | boolean |
Forces time code interpretation to be drop-frame. |
inherit_source_ndf_df | boolean |
Inherit the NDF/DF mode from the source time code. |
no_recalculation | boolean |
Disable timecode recalculation when the frame rate changes. |
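The semicolon in a drop-frame value such as 01:00:00;00 signals that, at 30000/1001 fps, frame numbers 0 and 1 are skipped at the start of every minute not divisible by ten. A sketch of the standard frame-count calculation behind drop-frame timecode:

```python
def df_timecode_to_frames(hh: int, mm: int, ss: int, ff: int) -> int:
    """Frame count for a 29.97 fps drop-frame timecode hh:mm:ss;ff.
    Two frame numbers are dropped each minute, except every tenth minute."""
    total_minutes = hh * 60 + mm
    dropped = 2 * (total_minutes - total_minutes // 10)
    return (hh * 3600 + mm * 60 + ss) * 30 + ff - dropped

# "00:01:00;02" is frame 1800, because ;00 and ;01 are skipped in that minute:
frames = df_timecode_to_frames(0, 1, 0, 2)
```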
Metadata
Example Metadata Object
{
"file_pattern": "{source_basename}_converted.mov",
"existing_files": "replace",
"container": {
"kind": "mov"
},
"video": {
"codec": "prores",
"profile": "ap4x",
"width": 3840,
"height": 2160,
"dar": "16/9",
"frame_rate": "24000/1001",
"interlace_mode": "progressive",
"chroma_format": "yuv444p12le",
"color_primaries": "bt2020",
"color_trc": "st2084",
"color_matrix": "bt2020nc"
},
"metadata": [
{
"format": "dolbyvision_metadata",
"file_pattern": "{source_basename}_converted.xml",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
}
}
]
}
transcode.targets.metadata
Name | Type | Description |
---|---|---|
format | enum dolbyvision_metadata |
The metadata format to use. |
location | object | A location that overrides any location defined within the parents of this encode target. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
existing_files | enum delete_and_replace replace replace_late rename_new rename_org fail |
The desired behavior when a target file already exists. "replace": will delete the original file and write the new one. "rename_new": gives the new file a different auto-generated name. "rename_org": renames the original file. Note that renaming the original file may not be possible depending on the target location. "delete_and_replace": attempts to immediately delete the original file. This will allow for fast failure in the case of inadequate permissions. "replace_late": does not attempt to delete the original -- simply executes a write. default: fail |
Subtitle
Example Subtitle Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 96
}
],
"subtitle": [
{
"source_map": "use_if_exists",
"format": "scc",
"language": "en"
}
]
}
]
}
}
transcode.targets.subtitle
Name | Type | Description |
---|---|---|
include_if_source_has | array | This array allows for conditionally outputting tracks based on whether or not a specific input track exists. The tracks in the source are referred to by number reference: subtitle[0] refers to the first subtitle track. |
include_conditions | array | Specifies conditions under which this subtitle will be used. Can use JavaScript math.js nomenclature. |
verify | boolean | Enable or disable post transcode verification for this track. default: true |
source_map | enum use_if_exists source_or_empty required_in_source |
This specifies the behavior to use when creating a subtitle track. Selecting source_or_empty will use the source's subtitle data or create an empty track if this data does not exist. default: required_in_source |
format | enum webvtt srt stl ass ttml imsc1 dvbsub scc timed_text |
The subtitle format to use. |
pid | integer |
The program ID (PID) of the subtitle stream - only used for MPEG transport streams. maximum: 8190 |
language | string | The ISO 639.2 three letter code for the language of the subtitle. |
override_language_code | string | Override the default language code (such as for ttml). |
track_group_id | string | This indicates which Group this track belongs to. Multiple tracks with the same content but different bitrates would have the same track_group_id. |
layer_id | string | This indicates which Layer this track belongs to. For example, this allows bundling one video layer and multiple audio layers with the same bitrates but different languages. |
layer_affinities | array | This indicates which other layers this layer can be combined with. For example, to combine audio and video layers. |
match_source_language | boolean | If true, sets the output subtitle track language to be the same as the language of the subtitle track in the source. |
dvb_options | object | The options for European Digital Video Broadcasting (DVB) subtitles. |
webvtt_options | object | Options for the WebVTT output. |
scc_options | object | Options for the SCC output. |
mainconcept_stream_mux_options | string |
Provide direct stream instruction to the MainConcept multiplexer. Values are constructed as "prop=val,prop=val". See MainConcept documentation for valid values. |
profile | object | Options for profile, specific only for subtitles. |
frame_rate | number string |
Frame rate for the output subtitles; overrides the value from the source video. |
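The `include_if_source_has` parameter above can gate a subtitle output on the presence of a source track. Below is a hedged sketch that builds such a `targets.subtitle` entry as a Python dict; the helper name and its defaults are illustrative, not part of the Hybrik API, while the `subtitle[N]` reference syntax comes from the table above.

```python
# Build a targets.subtitle entry that is only emitted when the corresponding
# source subtitle track exists. Helper name and defaults are illustrative.

def conditional_subtitle(track_index: int, fmt: str = "webvtt", language: str = "en") -> dict:
    """Return a subtitle target gated on source track `track_index`."""
    return {
        "include_if_source_has": [f"subtitle[{track_index}]"],
        "source_map": "use_if_exists",
        "format": fmt,
        "language": language,
    }

entry = conditional_subtitle(0)
```

The resulting dict can be placed into the `subtitle` array of a transcode target, as in the example objects above.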
Dvb_options
transcode.targets.subtitle.dvb_options
Example Dvb_options Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 96
}
],
"subtitle": [
{
"source_map": "use_if_exists",
"format": "dvbsub",
"language": "en",
"dvb_options": {
"page_id": 3,
"use_full_width_regions": true,
"video_start_timecode": "01:00:00:00",
"width_overide": 1280,
"height_overide": 720
}
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
sd_in_hd | boolean | Positions SD subtitles correctly in an HD frame. |
use_df_timecode | boolean | Uses drop-frame timecode timing for the subtitles. |
page_id | integer | Specifies the page_id for the subtitle. |
dvb_subtitling_type | integer | Specifies the type of DVB subtitling. |
dvb_skip_dds | boolean | Skips a Display Definition Segment. |
video_start_timecode | string | The start timecode for the subtitle. |
color_depth_bits | integer | The bit depth for the subtitle. |
use_region_fill_flag | boolean | Flag setting whether to fill the defined region. The fill is completed before any text is rendered. |
use_full_width_regions | boolean | Flag setting whether the region should encompass the entire width of the screen. No two regions can be presented horizontally next to each other. |
use_full_width_objects | boolean | Flag setting whether objects should encompass the entire width of the screen. |
non_empty_pcs_on_hide | boolean | Flag to send non-empty Page Composition Segment on subtitle hide command. |
send_empty_bitmap_on_hide | boolean | Flag to send empty bitmap on hide command. |
use_transparent_color_0 | boolean | Flag to set color0 to be transparent. |
width_overide | integer | Override the width of the subtitle with this value. |
height_overide | integer | Override the height of the subtitle with this value. |
dar_overide | number | Override the Display Aspect Ratio of the subtitle with this value. |
font_height | integer | The height in pixels for the subtitle font. |
outline_size | integer | The thickness of the outline to use around the subtitle font. |
bold | boolean | Flag to set the font to bold. |
Webvtt_options
transcode.targets.subtitle.webvtt_options
Example Webvtt_options Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 96
}
],
"subtitle": [
{
"source_map": "use_if_exists",
"format": "webvtt",
"language": "en",
"webvtt_options": {
"cue_numbering": true
}
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
cue_numbering | boolean | Flag to turn on cue numbering in the WebVTT output. |
Scc_options
transcode.targets.subtitle.scc_options
Example Scc_options Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 96
}
],
"subtitle": [
{
"source_map": "use_if_exists",
"format": "scc",
"language": "en",
"scc_options": {
"drop_frame": true
}
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
drop_frame | boolean | Flag to toggle drop/non-drop time code in scc. |
Profile
transcode.targets.subtitle.profile
Example Profile Object
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted{default_extension}",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 96
}
],
"subtitle": [
{
"source_map": "use_if_exists",
"format": "scc",
"language": "en",
"profile": {
"name": "itt",
"region": "bottom"
}
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
name | enum itt |
Subtitle profile name. |
region | enum top bottom |
Determines where (top or bottom) unsupported regions are placed. |
Analyze Task
Analyze
Example Analyze Object
{
"name": "Hybrik Analyze Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4"
}
}
},
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"volume": {
"enabled": true
},
"levels": {
"enabled": true
}
}
],
"video": {
"black": {
"duration_sec": 5,
"enabled": true
},
"black_borders": {
"black_level": 0.08,
"enabled": true
},
"interlacing": {
"enabled": true
},
"levels": {
"chroma_levels": true,
"histograms": true,
"enabled": true
}
}
}
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "analyze_task"
}
]
}
}
]
}
}
One of the features built into Hybrik is the ability to analyze the properties of a file. There are two types of analysis that can be performed -- general_properties and deep_properties. The general_properties include things like file size, video format, number of channels, etc. This is metadata that can be gathered very quickly without actually decoding the file. In contrast, the deep_properties require decoding the file in order to analyze the individual video and audio streams. The deep_properties include things like black detection, silence detection, PSNR, VMAF, etc. The analysis task returns information about the file, but does not make any "judgement" about whether the information represents a good or bad file. If you would like to have your workflow report errors or warnings based on the analysis, you can run a QC Task after the Analyze Task. The Analyze Task results are embedded in the job results and include a wide set of information depending on the type of analysis. Typically, the returned values include minimum, maximum, and mean values, as well as the frame location of min/max values. Additionally, for analysis types that have time-varying results (like datarate), a set of 100 samples distributed over the duration of the file will be returned for graphing purposes.
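Because every Hybrik job is plain JSON, an analyze job like the example above can be assembled programmatically before being POSTed with the session's auth token. A minimal sketch, mirroring the example object; the helper function is illustrative, not part of any Hybrik SDK:

```python
# Assemble the analyze job from the example above as a Python dict, ready to
# serialize and submit with any HTTP client. Structure mirrors the JSON sample.
import json

def make_analyze_job(source_url: str) -> dict:
    """Return a Hybrik job dict wiring a source element to an analyze task."""
    return {
        "name": "Hybrik Analyze Example",
        "payload": {
            "elements": [
                {
                    "uid": "source_file",
                    "kind": "source",
                    "payload": {
                        "kind": "asset_url",
                        "payload": {"storage_provider": "s3", "url": source_url},
                    },
                },
                {
                    "uid": "analyze_task",
                    "kind": "analyze",
                    "payload": {
                        "general_properties": {"enabled": True},
                        "deep_properties": {
                            "audio": [{"volume": {"enabled": True}}],
                            "video": {"black": {"enabled": True, "duration_sec": 5}},
                        },
                    },
                },
            ],
            "connections": [
                {
                    "from": [{"element": "source_file"}],
                    "to": {"success": [{"element": "analyze_task"}]},
                }
            ],
        },
    }

job = make_analyze_job("s3://my_bucket/my_input_folder/my_file.mp4")
body = json.dumps(job)  # request body for the Create Job call
```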
analyze
Name | Type | Description |
---|---|---|
options | object | Options for the Analyze Task. |
source_pipeline | object | Pipeline modifications (such as trimming) to apply prior to running the analysis task. |
compare_asset | object | Compare asset to use for comparative analyzers. |
general_properties | object | Metadata analysis of a file or stream. Does not require decoding of the asset. |
deep_properties | object | Object specifying which deep properties to analyze. Deep properties require decoding the asset. |
reports | array | An object or array of objects describing the location and creation conditions for reports. |
overrides | object | Choose which options should be overridden during analysis. |
Options
Example Options Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"options": {
"quick_scan": {
"include_start": true,
"include_end": true,
"nr_of_slices": 10,
"slice_duration_sec": 10
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"black_borders": {
"black_level": 0.08,
"enabled": true
}
}
}
}
}
analyze.options
Name | Type | Description |
---|---|---|
asset_db_cache | boolean | Enable analyze result database caching. |
report_version | integer | The analyze report version; preserves legacy elements for existing parsing code. |
display_paths | boolean | Show the full file path in the PDF report. default: true |
quick_scan | object | Analyzer QuickScan properties. |
response_version | enum 1 2 |
Which version of the Analyzer job summary JSON to generate. |
Quick_scan
Example Quick_scan Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"options": {
"quick_scan": {
"slice_duration_sec": 10,
"slice_interval_sec": 100
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": {
},
"video": {
}
}
}
}
analyze.options.quick_scan
Name | Type | Description |
---|---|---|
include_start | boolean | Forces the slices to include the start of the file. |
include_end | boolean | Forces the slices to include the end of the file. |
nr_of_slices | integer | Number of slices to be included in the scan. |
slice_duration_sec | number | The duration in seconds of each slice. |
slice_interval_sec | number | The interval between slices in seconds. |
coverage_percent | number | The amount of the file to cover. 1 = 1%, 100 = 100%. |
scan_intervals | array | An array specifying specific intervals to be scanned. |
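To get a feel for how the quick_scan parameters trade accuracy against speed, the arithmetic below approximates how much of an asset a slice configuration covers. This mirrors the parameter meanings in the table above; it is illustrative only, not Hybrik's exact slice-placement algorithm.

```python
# Approximate quick_scan coverage: n non-overlapping slices of d seconds
# over a file of D seconds cover roughly n*d/D of the content.

def approx_coverage_percent(duration_sec: float, nr_of_slices: int,
                            slice_duration_sec: float) -> float:
    """Percent of the file covered, assuming slices do not overlap."""
    covered = min(nr_of_slices * slice_duration_sec, duration_sec)
    return 100.0 * covered / duration_sec

# 10 slices of 10 s over a 1-hour file scan roughly 100 s of content.
pct = approx_coverage_percent(3600, 10, 10)
```

Raising `coverage_percent` or `nr_of_slices` increases accuracy at the cost of decode time.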
Source_pipeline
Example Source_pipeline Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"source_pipeline": {
"trim": {
"inpoint_sec": 60,
"outpoint_sec": 120
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": {
"ebur128": {
"enabled": true,
"scale_meter": 9
}
}
}
}
}
analyze.source_pipeline
Name | Type | Description |
---|---|---|
trim | anyOf by_sec_in_out by_sec_in_dur by_timecode by_asset_timecode by_frame_nr by_section_nr by_media_track by_nothing |
Object defining the type of trim operation to perform on an asset. |
accelerated_prores | boolean |
Flag to use the native, accelerated Apple ProRes decoder. default: true |
scaler | object | The type of function to be used in scaling operations. |
skip_video_decoding | boolean |
Flag to forcefully skip video decoding during analysis. default: false |
Scaler
Example Scaler Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"source_pipeline": {
"scaler": {
"kind": "zscale",
"config_string": "dither=error_diffusion",
"apply_always": true
}
}
}
}
analyze.source_pipeline.scaler
Name | Type | Description |
---|---|---|
kind | enum default zscale |
The type of scaling to be applied. default: default |
algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for scaling operations. This will apply to both up-scale and down-scale. These may be set separately using the upscale_algorithm and downscale_algorithm parameters. |
upscale_algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for up-scaling operations. |
downscale_algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for down-scaling operations. |
config_string | string | The configuration string to be used with the specified scaling function. |
apply_always | boolean | Always use the specified scaling function. |
Compare_asset
Example Compare_asset Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": {
"levels": {
"enabled": true
}
},
"video": {
"settings": {
"comparative": {
"size_selector": "config",
"width": 1280,
"height": 720
}
},
"vmaf": {
"enabled": true
}
}
}
}
}
analyze.compare_asset
Name | Type | Description |
---|---|---|
kind | enum asset_url asset_urls asset_complex |
The type of asset. asset_url is a single asset, asset_urls is an array of assets, and asset_complex is an asset assembled from multiple components. |
payload | anyOf asset_url asset_urls |
The asset description payload. |
General_properties
Example General_properties Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": {
"volume": {
"enabled": true
}
},
"video": {
"black": {
"duration_sec": 5,
"enabled": true
}
}
}
}
}
analyze.general_properties
Name | Type | Description |
---|---|---|
enabled | boolean | Enable this analyze operation. |
mov_atom_descriptor_style | enum none condensed by_track full |
"none": do not list atoms. "condensed": pick the most important atoms and list linearly with the belonging tracks. "by_track": show the full hierarchy but list along with tracks. "full": show the full file hierarchy in the asset element. |
Deep_properties
There are a variety of deep_properties that can be analyzed, broken into audio and video sections. You may have multiple audio and video analyzers running simultaneously as part of a single task, and different audio analyzers may be applied to different audio tracks. In the same way that audio tracks are represented by an array, the audio analyzers consist of a corresponding array.
Example Deep_properties Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"levels": {
"enabled": true
},
"volume": {
"enabled": true
},
"ebur128": {
"enabled": true
},
"silence": {
"enabled": true,
"noise_db": -60,
"duration_sec": 1
}
}
],
"video": {
"black": {
"enabled": true,
"black_level": 0.03,
"duration_sec": 1
},
"black_borders": {
"enabled": true
},
"interlace_mode": {
"enabled": true
},
"levels": {
"enabled": true
},
"blockiness": {
"algorithm": "hybm",
"enabled": true,
"violation_frame_window": 5,
"violation_threshold": 0.2,
"max_report_violations": 5
}
}
}
}
}
analyze.deep_properties
Name | Type | Description |
---|---|---|
audio | array | Audio tracks are represented by an array, where each element of the array refers to a track. The deep properties analysis is also an array of objects. Each object in the array consists of the analyses that will be performed on the corresponding audio track. |
video | object | Object containing all of the video deep_properties analyses to be performed. |
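Since the `audio` analyzers form an array parallel to the source's audio tracks, a job can attach different analyzers to different tracks. A small sketch of building such an array; the helper and the per-track choices (volume everywhere, silence detection on track 0 only) are hypothetical:

```python
# Build a deep_properties.audio array with one analyzer entry per source track,
# using track_selector.index to bind each entry to its track.

def audio_analyzers(track_count: int) -> list:
    """Run volume on every track, plus silence detection on track 0 only."""
    analyzers = []
    for i in range(track_count):
        entry = {"track_selector": {"index": i}, "volume": {"enabled": True}}
        if i == 0:
            entry["silence"] = {"enabled": True, "noise_db": -60, "duration_sec": 1}
        analyzers.append(entry)
    return analyzers

deep_properties = {"audio": audio_analyzers(2), "video": {"levels": {"enabled": True}}}
```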
Audio
Example Audio Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"track_selector": {
"index": 2
},
"volume": {
"enabled": true
},
"levels": {
"enabled": true
}
}
],
"video": {
"black": {
"duration_sec": 5,
"enabled": true
}
}
}
}
}
analyze.deep_properties.audio
Name | Type | Description |
---|---|---|
track_selector | object | Mechanism to select a specific audio track. |
levels | object | Performs a deep analysis of the audio track(s), including DC offset, RMS peak, level etc. |
ebur128 | object | Performs a EBU R.128 loudness determination on the audio track(s). |
dolby_professional_loudness | object | Performs Dolby loudness analysis on the audio track(s). |
loudness | object | Performs loudness analysis on the audio track(s) and draws temporal loudness on charts. |
volume | object | Uses simple volume measurement. This is less precise than using deep_stats but of higher performance. |
silence | object | Detect silent segments in the audio track(s). |
channel_compare | object | Finds similarity between audio channels by comparing their spectral components, reporting similarity above a defined threshold. |
phase_mismatch | object | Find phase mismatches between audio channels. |
psnr | object | Determine the PSNR value between an asset and a reference. |
emergency_alert | object | Detect emergency alert signals in the audio track(s). |
Track_selector
analyze.deep_properties.audio.track_selector
Example Track_selector Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"track_selector": {
"pid": 381
},
"volume": {
"enabled": true
},
"levels": {
"enabled": true
}
}
],
"video": {
"black": {
"duration_sec": 5,
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
id | integer | The ID of the audio track to be analyzed. |
pid | integer | The PID of the audio track to be analyzed. |
index | integer | The index (0-based) of the audio track to be analyzed. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Levels
analyze.deep_properties.audio.levels
Example Levels Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"deep_properties": {
"audio": [
{
"levels": {
"enabled": true
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
window_length_sec | number | Configures the RMS measurement window size. minimum: 0.01 maximum: 10 default: 0.05 |
Ebur128
analyze.deep_properties.audio.ebur128
Example Ebur128 Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"ebur128": {
"enabled": true,
"scale_meter": 9
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
scale_meter | number | EBU R.128 scale meter; common values are 9 and 18. minimum: 9 maximum: 18 default: 9 |
target_reference | object | The LUFS and true peak values to be used for EBU R.128 normalization. |
Target_reference
analyze.deep_properties.audio.ebur128.target_reference
Example Target_reference Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"ebur128": {
"enabled": true,
"scale_meter": 18,
"target_reference": {
"integrated_lufs": -16.45,
"loudness_lra_lufs": -12.6,
"true_peak_dbfs": -3.5
}
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
integrated_lufs | number | Reference integrated LUFS value. |
loudness_lra_lufs | number | Reference LRA LUFS value. |
true_peak_dbfs | number | Reference true peak DBFS value. |
Dolby_professional_loudness
analyze.deep_properties.audio.dolby_professional_loudness
Example Dolby_professional_loudness Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"deep_properties": {
"audio": [
{
"dolby_professional_loudness": {
"enabled": true,
"regulation_type": "ebu_r128"
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
correction_mode | enum pcm_normalization metadata_update |
The Dolby Professional Loudness Correction mode. |
use_dialogue_intelligence | boolean | Dolby Dialogue Intelligence enabled. |
regulation_type | enum atsc_a85_fixed atsc_a85_agile ebu_r128 freetv_op59 arib_tr_b32 manual |
The type of regulation to use for Dolby Professional Loudness Correction. |
loudness_target | number | The loudness LUFS target (-31..-8 dB). This is the equivalent of dialnorm in the Dolby Digital encoder. minimum: -31 maximum: -8 |
speech_detection_threshold | integer | The speech detection threshold (0..100, increments of 1). maximum: 100 |
limit_mode | enum true_peak sample_peak |
Specifies whether true peak or sample peak is used as the basis for leveling. |
peak_limit_db | number | The peak value in dB to use for loudness correction, -12 to -0.1 dBTP (in increments of 0.1 dBTP). minimum: -12 maximum: -0.1 |
Loudness
analyze.deep_properties.audio.loudness
Example Loudness Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"deep_properties": {
"audio": [
{
"loudness": {
"enabled": true,
"preset": "atsc_a85_fixed"
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
results_file | object | The location for the output Audio loudness analysis data. |
preset | enum atsc_a85_fixed atsc_a85_agile ebu_r128 freetv_op59 arib_tr_b32 |
The preset used for integrated results calculations. It does not affect chart data. default: ebu_r128 |
Results_file
analyze.deep_properties.audio.loudness.results_file
Example Results_file Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"deep_properties": {
"audio": [
{
"loudness": {
"enabled": true,
"preset": "atsc_a85_fixed",
"results_file": {
"file_pattern": "{source_basename}_loudness_analysis.txt",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket"
}
}
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
location | object | The location for the output Audio loudness analysis data. |
file_pattern | string | The file pattern for the output Audio loudness analysis data. |
Volume
analyze.deep_properties.audio.volume
Example Volume Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"volume": {
"enabled": true,
"is_optional": true
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Silence
analyze.deep_properties.audio.silence
Example Silence Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"silence": {
"enabled": true,
"noise_db": -60,
"duration_sec": 1
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
duration_sec | number | Silence must exceed this duration to trigger detection. maximum: 3600 default: 10 |
noise_db | number | The audio level must be above this value to be detected as a true audio signal. minimum: -90 default: -60 |
Channel_compare
analyze.deep_properties.audio.channel_compare
Example Channel_compare Object
{
"uid": "dual_mono_detect",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": {
"channel_compare": {
"results_file": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"file_pattern": "{source_basename}_dual_mono_detect.txt"
},
"enabled": true,
"sensitivity": 5,
"correlation_threshold": 0.6,
"min_duration_sec": 1
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
results_file | object | The location for the output channel compare information. |
sensitivity | number | Sets how many spikes of miscorrelation can be skipped. default: 5 |
correlation_threshold | number | Sets the correlation threshold value. maximum: 1 default: 0.8 |
min_duration_sec | number | The minimum duration of the similar part of the signals. When set to -1.0, the minimum duration is set to the length of the channels. minimum: -1 default: 5 |
Results_file
analyze.deep_properties.audio.channel_compare.results_file
Example Results_file Object
{
"uid": "dual_mono_detect",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": {
"channel_compare": {
"results_file": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"file_pattern": "{source_basename}_dual_mono_detect.txt"
},
"enabled": true,
"sensitivity": 5,
"correlation_threshold": 0.6,
"min_duration_sec": 1
}
}
}
}
}
Name | Type | Description |
---|---|---|
location | object | The location for the output result of comparison information. |
file_pattern | string | The file pattern for the output result of comparison information. |
Phase_mismatch
analyze.deep_properties.audio.phase_mismatch
Example Phase_mismatch Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_comparison_master.mp4"
}
},
"deep_properties": {
"audio": [
{
"phase_mismatch": {
"enabled": true,
"sensitivity": 4,
"algorithm": "fast",
"min_duration_sec": 5
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
results_file | object | The location for the output channel compare information. |
min_duration_sec | number | The minimum phase mismatch duration. When set to -1.0, the minimum duration is set to the length of the channels. minimum: -1 default: -1 |
sensitivity | number | The number of phase spikes to skip. default: 1 |
algorithm | enum fast precise |
The type of phase detection algorithm. default: fast |
mode | enum all reversal |
"reversal" mode shows only phase mismatches near 180 degrees; "all" mode shows all phase mismatches greater than 60 degrees. default: all |
Results_file
analyze.deep_properties.audio.phase_mismatch.results_file
Example Results_file Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_comparison_master.mp4"
}
},
"deep_properties": {
"audio": [
{
"phase_mismatch": {
"enabled": true,
"sensitivity": 4,
"algorithm": "fast",
"min_duration_sec": 5,
"results_file": {
"file_pattern": "{source_basename}_phase_mismatch_results.txt",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket"
}
}
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
location | object | The location for the output result of comparison information. |
file_pattern | string | The file pattern for the output result of comparison information. |
Psnr
analyze.deep_properties.audio.psnr
Example Psnr Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_comparison_master.mp4"
}
},
"deep_properties": {
"audio": [
{
"psnr": {
"enabled": true
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Emergency_alert
analyze.deep_properties.audio.emergency_alert
Example Emergency_alert Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"options": {
"pipeline": {
"analyzer_version": "hybrik_4.2"
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"emergency_alert": {
"enabled": true,
"is_optional": false
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Video
Example Video Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": {
"volume": {
"enabled": true
}
},
"video": {
"black": {
"duration_sec": 5,
"enabled": true
},
"black_borders": {
"black_level": 0.08,
"enabled": true
},
"interlacing": {
"enabled": true
},
"levels": {
"chroma_levels": true,
"histograms": true,
"enabled": true
}
}
}
}
}
analyze.deep_properties.video
Name | Type | Description |
---|---|---|
track_selector | object | Mechanism to select a specific video track for analysis. |
settings | object | Settings for the comparison file, such as filters to be applied prior to comparison. |
black | object | Detect segments with black video. |
black_borders | object | Detect cropping, such as letter- or pillarboxes. |
interlacing | object | Detect interlacing properties of the video by scanning frames. |
duplicate_frames | object | Detect duplicate frames in the video by scanning frames. |
levels | object | Analyze the video and detect min/max Y,Cb,Cr etc. |
blockiness | object | Detect compression block artifacts. |
hdr_stats | object | Detect HDR signal levels. |
complexity | object | Produces a measurement for how complex the content is over time. |
content_variance | object | Produces a measurement for how much the content is changing over time. |
scene_change_score | object | Detects scene changes probabilities. |
pse | object | Detect Photo-Sensitive Epilepsy (PSE) artifacts. |
compressed_stats | object | Determines compressed frame sizes etc. |
ssim | object | Determine the SSIM value between an asset and a reference file. |
psnr | object | Determine the PSNR value between an asset and a reference file. |
vmaf | object | Uses the Netflix Video Multi-Method Assessment Fusion (VMAF) methods to assess the quality of an asset compared with a reference file. |
Track_selector
analyze.deep_properties.video.track_selector
Example Track_selector Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"track_selector": {
"pid": 380
},
"black": {
"duration_sec": 5,
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
id | integer | Track selector by stream ID. |
pid | integer | Track selector by stream PID (transport streams only). |
index | integer | Track selector by stream index. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Settings
analyze.deep_properties.video.settings
Example Settings Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"levels": {
"enabled": true
},
"settings": {
"comparative": {
"width": 1280,
"height": 720,
"compare_filters": [
{
"kind": "crop",
"payload": {
"top": 10,
"bottom": 10
}
}
]
}
}
}
}
}
}
Name | Type | Description |
---|---|---|
comparative | object | Operations to be applied to the comparative file before the comparison is executed. For video filter settings, please see the transcoding section. |
Comparative
analyze.deep_properties.video.settings.comparative
Example Comparative Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"settings": {
"comparative": {
"size_selector": "compare_asset",
"compare_filters": [
{
"kind": "crop",
"payload": {
"top": 10,
"bottom": 10
}
}
]
}
},
"ssim": {
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
size_selector | enum main_asset compare_asset reference_asset config |
When comparing two files, select which file's size (width x height) will be used as the reference size. Choose config to set custom scaling. |
width | integer | Sets the custom width that both files will be scaled to prior to comparison. |
height | integer | Sets the custom height that both files will be scaled to prior to comparison. |
scaler | object | The type of function to be used in scaling operations. |
chroma_format_selector | enum main_asset compare_asset reference_asset config |
Select which file's chroma format will be used as the reference. Choose config to set custom values. |
chroma_format | enum yuv411p yuv420p yuv422p yuv420p10le yuv422p10le yuv444p10le yuva444p10le yuv420p12le yuv422p12le yuv444p12le yuva444p12le yuv420p16le yuv422p16le yuv444p16le yuva444p16le yuvj420p yuvj422p rgb24 rgb48be rgba64be rgb48le rgba64le gbrp10le gbrp10be gbrp12le gbrp12be |
Available chroma formats. |
compare_filters | array | Filters to be applied to the reference file prior to the file comparison. See the Transcode section for filter definitions. |
Scaler
analyze.deep_properties.video.settings.comparative.scaler
Example Scaler Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "{{input_path}}/{{input_file}}"
}
},
"deep_properties": {
"video": {
"settings": {
"comparative": {
"size_selector": "reference_asset",
"scaler": {
"algorithm": "bicubic"
}
}
},
"vmaf": {
"enabled": true,
"results_file": {
"file_pattern": "{source_basename}_VMAF_bicubic_results_analysis.json",
"location": {
"storage_provider": "s3",
"path": "{{output_path}}"
}
}
}
}
}
}
}
Name | Type | Description |
---|---|---|
kind | enum default zscale |
The type of scaling to be applied. default: default |
algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for scaling operations. This will apply to both up-scale and down-scale. These may be set separately using the upscale_algorithm and downscale_algorithm parameters. |
upscale_algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for up-scaling operations. |
downscale_algorithm | enum bilinear bicubic nearest_neighbor lanczos bicubic_spline spline16 spline36 sinc |
The algorithm to be used for down-scaling operations. |
config_string | string | The configuration string to be used with the specified scaling function. |
apply_always | boolean | Always use the specified scaling function. |
Black
analyze.deep_properties.video.black
Example Black Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"black": {
"enabled": true,
"black_level": 0.03,
"duration_sec": 1
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
duration_sec | number | Black video must exceed this duration to trigger detection. maximum: 3600 default: 10 |
black_level | number | The video signal level must be above this value to be detected as non-black. maximum: 1 default: 0.1 |
black_pixel_ratio | number | Ratio of black vs. non-black pixels required for a picture to be classified as black. 1.0: all pixels must be black; 0.0: no pixels must be black. maximum: 1 default: 0.98 |
ire_range_mode | enum auto full limited |
Determines whether the analyzer uses the IRE range assumed from the file (auto), the full range (0..255 for 8-bit), or the limited range (16..235 for 8-bit). default: auto |
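How `black_level` and `black_pixel_ratio` interact can be shown with a small Python sketch. This is an illustration of the classification rule described above, not Hybrik's implementation; luma values are assumed normalized to 0..1 after the IRE range is applied.

```python
# Illustrative sketch: a pixel counts as black when its level is NOT above
# black_level; the frame is black when the black-pixel fraction reaches
# black_pixel_ratio. A detection then also requires a run of such frames
# longer than duration_sec.

def is_black_frame(luma, black_level=0.1, black_pixel_ratio=0.98):
    """Classify one frame as black from its normalized luma samples."""
    black_pixels = sum(1 for y in luma if y <= black_level)
    return black_pixels / len(luma) >= black_pixel_ratio

# 99% of pixels at level 0.02, 1% at 0.5 -> black with the defaults
frame = [0.02] * 99 + [0.5]
assert is_black_frame(frame) is True
assert is_black_frame(frame, black_pixel_ratio=1.0) is False
```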
Black_borders
analyze.deep_properties.video.black_borders
Example Black_borders Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"black_borders": {
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
black_level | number | The video signal level must be above this value to be detected as non-black. maximum: 1 default: 0.1 |
max_outliers | integer | Number of non-black lines (counted from the edge) to be ignored when scanning for black borders. |
Interlacing
analyze.deep_properties.video.interlacing
Example Interlacing Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"interlacing": {
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Duplicate_frames
analyze.deep_properties.video.duplicate_frames
Example Duplicate_frames Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": [
{
"duplicate_frames": {
"results_file": {
"location": {
"storage_provider": "s3",
"path": "{{destination_location}}"
},
"file_pattern": "duplicate_frames.txt"
},
"enabled": true,
"max": 0,
"hi": 768,
"lo": 320,
"frac": 0.33
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
max | integer | Set the maximum number of consecutive duplicate frames which can be detected (if positive), or the minimum interval between detected frames (if negative). If the value is 0, the frame is detected disregarding the number of previous sequentially detected duplicate frames. |
hi | integer | High detection threshold default: 768 |
lo | integer | Low detection threshold default: 320 |
frac | number | Set the fraction of the frame size which is used for the detection. default: 0.33 |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
results_file | object | The location for the output duplicate frames data. |
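The `hi`, `lo`, and `frac` parameters follow the same convention as FFmpeg's mpdecimate filter, where frame differences are measured as sums of absolute differences (SAD) over 8x8 blocks (so the default `hi` of 768 is 64x12 and `lo` of 320 is 64x5). The following Python sketch illustrates that decision rule under this assumption; it is not Hybrik's implementation.

```python
# Illustrative duplicate-frame decision in the style of mpdecimate:
# a frame is a duplicate of its predecessor when no 8x8 block differs by
# more than hi, and fewer than frac of the blocks differ by more than lo.

def is_duplicate(block_sads, hi=768, lo=320, frac=0.33):
    """block_sads: per-block SAD values against the previous frame."""
    if any(sad > hi for sad in block_sads):
        return False                      # at least one block changed a lot
    busy = sum(1 for sad in block_sads if sad > lo)
    return busy < frac * len(block_sads)  # few blocks changed moderately

assert is_duplicate([10, 20, 30]) is True       # essentially unchanged
assert is_duplicate([10, 20, 900]) is False     # one block changed a lot
assert is_duplicate([400, 400, 400]) is False   # many blocks changed a bit
```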
Results_file
analyze.deep_properties.video.duplicate_frames.results_file
Example Results_file Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": [
{
"duplicate_frames": {
"results_file": {
"location": {
"storage_provider": "s3",
"path": "{{destination_location}}"
},
"file_pattern": "duplicate_frames.txt"
},
"enabled": true,
"max": 0,
"hi": 768,
"lo": 320,
"frac": 0.33
}
}
]
}
}
}
Name | Type | Description |
---|---|---|
location | object | The file location for the duplicate frames data. |
file_pattern | string | The file pattern for the duplicate frames data. |
Levels
analyze.deep_properties.video.levels
Example Levels Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"levels": {
"chroma_levels": true,
"histograms": true,
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
st2084_levels | boolean | Report ST.2084 levels. |
chroma_levels | boolean | Report chroma levels. |
histograms | boolean | Produce level histograms. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Blockiness
analyze.deep_properties.video.blockiness
Example Blockiness Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"blockiness": {
"enabled": true,
"algorithm": "gbim"
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
algorithm | enum gbim npbm hybm |
GBIM and NPBM are standard algorithms for blockiness analysis; HYBM is a Hybrik proprietary algorithm, less sensitive to false positives. |
violation_frame_window | number |
Sets the minimum number of frames where the threshold is exceeded to count as a violation. Default is 1. |
violation_threshold | number |
Sets the blockiness threshold to trigger a violation. |
max_report_violations | integer |
Sets the maximum number of violations to be reported. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Hdr_stats
analyze.deep_properties.video.hdr_stats
Example Hdr_stats Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"hdr_stats": {
"enabled": true,
"ire_range_mode": "limited",
"transfer_function": "hlg"
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
transfer_function | enum pq hlg |
Select between PQ1000 (pq) and HLG (hlg). |
ire_range_mode | enum auto full limited |
Determines whether the analyzer uses the IRE range assumed from the file (auto), the full range (0..255 for 8-bit), or the limited range (16..235 for 8-bit). default: auto |
max_luminance | number | Set an assumed max luminance, range 0..1. |
min_luminance | number | Set an assumed minimum luminance, range 0..1. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
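The `min_luminance`/`max_luminance` values are expressed in the 0..1 signal range. For PQ content, a normalized code value maps to an absolute luminance via the SMPTE ST 2084 EOTF; the following sketch shows that standard mapping (the function name is ours, the constants are from the specification).

```python
# SMPTE ST 2084 (PQ) EOTF: normalized signal (0..1) -> luminance in cd/m^2.

def pq_eotf_nits(n):
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = n ** (1 / m2)
    y = (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)
    return 10000.0 * y  # PQ is defined up to 10,000 nits

assert pq_eotf_nits(0.0) == 0.0                  # signal floor -> black
assert abs(pq_eotf_nits(1.0) - 10000.0) < 1e-6   # signal ceiling -> 10,000 nits
```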
Complexity
analyze.deep_properties.video.complexity
Example Complexity Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"complexity": {
"enabled": true,
"analysis_width": 426,
"analysis_height": 240
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
analysis_width | integer | Scales the content to this width before calculating the complexity. |
analysis_height | integer | Scales the content to this height before calculating the complexity. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Content_variance
analyze.deep_properties.video.content_variance
Example Content_variance Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"content_variance": {
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
freeze_frames | object | Object containing parameters for freeze frame identification. |
Freeze_frames
analyze.deep_properties.video.content_variance.freeze_frames
Example Freeze_frames Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"content_variance": {
"enabled": true,
"freeze_frames": {
"threshold": 0.01,
"min_duration_sec": 1
}
}
}
}
}
}
Name | Type | Description |
---|---|---|
threshold | number | The threshold value for determining frame similarity. |
min_duration_sec | number | The minimum duration (in seconds) for a frame to be considered a freeze frame. |
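A freeze-frame pass of this kind can be sketched as follows: given a per-frame difference (or variance) signal, a freeze is any run of frames below `threshold` that lasts at least `min_duration_sec`. This is an illustration of the rule, with hypothetical names, not Hybrik's implementation.

```python
# Hypothetical freeze-frame detection over a per-frame difference signal.

def find_freezes(diffs, fps, threshold=0.01, min_duration_sec=1.0):
    """Return (start_frame, end_frame) spans where diffs stay below
    threshold for at least min_duration_sec."""
    min_frames = int(min_duration_sec * fps)
    spans, start = [], None
    for i, d in enumerate(diffs + [float("inf")]):  # sentinel flushes last run
        if d < threshold:
            start = i if start is None else start
        elif start is not None:
            if i - start >= min_frames:
                spans.append((start, i - 1))
            start = None
    return spans

diffs = [0.5] * 5 + [0.001] * 30 + [0.5] * 5   # 30 frozen frames at 25 fps
assert find_freezes(diffs, fps=25) == [(5, 34)]
```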
Scene_change_score
analyze.deep_properties.video.scene_change_score
Example Scene_change_score Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"scene_change_score": {
"enabled": true,
"cutlist_threshold": 0.4
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
thresholds | array | An array of thresholds (range 0.1 to 1.0) to trigger a scene change detection. |
cutlist_threshold | number | Sets scene change threshold for cutlist generation (range 0.3 to 1.0). minimum: 0.3 maximum: 1 |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
PSE
Hybrik supports the analysis of video for Photo-Sensitive Epilepsy triggers. The PSE analysis looks for video effects that might trigger epileptic seizures in people who are sensitive to particular visual stimuli. The test identifies luminance flashes and patterns that exceed prescribed amplitude and frequency limits. The Hybrik PSE analysis is approved by the Digital Production Partnership for use with UK broadcast content.
NOTE: This analysis is only valid for SDR content. Although Hybrik does not currently fail with non-SDR content, any results using such content are neither meaningful nor certified.
analyze.deep_properties.video.pse
Example PSE Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"pse": {
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Compressed_stats
analyze.deep_properties.video.compressed_stats
Example Compressed_stats Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"compressed_stats": {
"enabled": true,
"window_size": 10
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
window_size | number | The window size in seconds to be used for the analysis. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Ssim
analyze.deep_properties.video.ssim
Example Ssim Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"ssim": {
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
results_file | object | The location for the output SSIM analysis data. |
Results_file
analyze.deep_properties.video.ssim.results_file
Example Results_file Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"ssim": {
"enabled": true,
"results_file": {
"file_pattern": "{source_basename}_ssim_analysis.txt",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
}
}
}
}
}
}
}
Name | Type | Description |
---|---|---|
location | object | The location for the output SSIM analysis data. |
file_pattern | string | The file pattern for the SSIM analysis data. |
Psnr
analyze.deep_properties.video.psnr
Example Psnr Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"settings": {
"comparative": {
"size_selector": "main_asset"
}
},
"psnr": {
"enabled": true
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
results_file | object | The location for the output PSNR analysis data. |
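PSNR itself is a standard metric: 10·log10(MAX²/MSE), where MAX is the peak sample value (255 for 8-bit video). A pure-Python sketch of the computation; the real analyzer operates on decoded picture planes, not Python lists.

```python
import math

def psnr(samples_a, samples_b, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length sample lists."""
    mse = sum((a - b) ** 2 for a, b in zip(samples_a, samples_b)) / len(samples_a)
    if mse == 0:
        return float("inf")   # identical content has infinite PSNR
    return 10 * math.log10(peak ** 2 / mse)

assert psnr([0, 128, 255], [0, 128, 255]) == float("inf")
assert abs(psnr([100] * 10, [110] * 10) - 28.13) < 0.01
```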
Results_file
analyze.deep_properties.video.psnr.results_file
Example Results_file Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
}
},
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"settings": {
"comparative": {
"size_selector": "main_asset"
}
},
"psnr": {
"enabled": true,
"results_file": {
"file_pattern": "{source_basename}_psnr_analysis.txt",
"location": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_folder"
}
}
}
}
}
}
Name | Type | Description |
---|---|---|
location | object | The file location for the PSNR analysis data. |
file_pattern | string | The file pattern for the PSNR analysis data. |
Vmaf
analyze.deep_properties.video.vmaf
Example Vmaf Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
},
"deep_properties": {
"video": {
"settings": {
"comparative": {
"size_selector": "config",
"width": 1280,
"height": 720
}
},
"vmaf": {
"enabled": true,
"model": "4k"
}
}
}
}
}
Name | Type | Description |
---|---|---|
enabled | boolean | Enables this analysis type. |
component_version | enum default 1.0 2.0 2.2 |
Choose the VMAF plugin version for the analysis task. default: default |
use_phone_model | boolean | Use VMAF phone model. |
model | enum 1080p 4k phone |
Choose the VMAF model for the analysis task. |
threads | integer | Manually specify number of VMAF threads to use. |
results_file | object | The location for the output VMAF analysis data. |
is_optional | boolean | If set to true, the analyzer will not fail if this media type does not exist in the source. |
Results_file
analyze.deep_properties.video.vmaf.results_file
Example Results_file Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"compare_asset": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_reference_file.mov"
}
},
"deep_properties": {
"video": {
"settings": {
"comparative": {
"size_selector": "config",
"width": 1280,
"height": 720
}
},
"vmaf": {
"enabled": true,
"results_file": {
"file_pattern": "{source_basename}_vmaf_analysis.txt",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
}
}
}
}
}
}
}
Name | Type | Description |
---|---|---|
location | object | The location for the output VMAF analysis data. |
file_pattern | string | The file pattern for the output VMAF analysis data. |
Reports
Example Reports Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"reports": [
{
"create_condition": "always",
"file_pattern": "{source_basename}_report.json",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_reports_folder"
}
}
],
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"volume": {
"enabled": true
},
"levels": {
"enabled": true
}
}
],
"video": {
"black_borders": {
"black_level": 0.08,
"enabled": true
},
"levels": {
"chroma_levels": true,
"histograms": true,
"enabled": true
}
}
}
}
}
analyze.reports
Name | Type | Description |
---|---|---|
create_condition | enum always on_failure on_success |
Determines whether to create a QC report always, only when the QC task fails, or only when it succeeds. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
location | object | Output location for the analysis report |
temp_location | object | Output location for temporary files (thumbnails) for the QC report |
options | object |
Options
Example Options Object
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"reports": [
{
"create_condition": "always",
"file_pattern": "{source_basename}_report.json",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_reports_folder"
},
"options": {
"report_version": "v3.0",
"display_paths": false,
"time_unit_display": "time_code"
}
}
],
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": [
{
"volume": {
"enabled": true
},
"levels": {
"enabled": true
}
}
],
"video": {
"black_borders": {
"black_level": 0.08,
"enabled": true
},
"levels": {
"chroma_levels": true,
"histograms": true,
"enabled": true
}
}
}
}
}
analyze.reports.options
Name | Type | Description |
---|---|---|
report_version | enum v1.0 v2.0 v3.0 |
|
use_timecode | boolean | |
timecode_kind | enum timecode_auto timecode_drop timecode_nodrop frame_nr media_time |
Choose the time/timecode format. If timecode_auto is used, drop/non-drop is chosen based on the frame rate. default: timecode_auto |
timecode_frame_rate | string number |
|
display_paths | boolean | Show the full file path in the PDF report. default: true |
time_unit_display | enum time time_code frames |
Choose the time/time_code/frames format in general and deep properties. default: time |
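The drop/non-drop distinction behind `timecode_kind` can be made concrete: at 29.97 fps, drop-frame timecode skips frame labels ;00 and ;01 at the start of every minute except minutes divisible by 10, so displayed time tracks wall-clock time. The following sketch shows the standard conversion; Hybrik performs this internally.

```python
# Standard frame-count -> 29.97 fps SMPTE drop-frame timecode conversion.

def frames_to_dropframe_tc(frame_number):
    fps, drop = 30, 2                        # nominal rate, labels dropped/minute
    per_10min = 10 * 60 * fps - 9 * drop     # 17982 real frames per 10 minutes
    per_min = 60 * fps - drop                # 1798 real frames per dropping minute
    d, m = divmod(frame_number, per_10min)
    if m > drop - 1:
        frame_number += drop * 9 * d + drop * ((m - drop) // per_min)
    else:
        frame_number += drop * 9 * d
    ff = frame_number % fps
    ss = (frame_number // fps) % 60
    mm = (frame_number // (fps * 60)) % 60
    hh = frame_number // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

assert frames_to_dropframe_tc(0) == "00:00:00;00"
assert frames_to_dropframe_tc(1800) == "00:01:00;02"   # ;00 and ;01 skipped
assert frames_to_dropframe_tc(17982) == "00:10:00;00"  # tenth minute: no drop
```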
Overrides
Example Overrides Object
{
"uid": "analyze_task",
"kind": "analyze",
"task": {
"retry_method": "fail"
},
"payload": {
"general_properties": {
"enabled": true
},
"overrides": {
"warp_mode": "normal"
},
"deep_properties": {
"audio": [
{
"dplc": {
"enabled": true,
"regulation_type": "atsc_a85_agile"
}
}
]
}
}
}
analyze.overrides
Name | Type | Description |
---|---|---|
warp_mode | enum normal warping loro pl2x |
Changes the warp_mode option for analysis purposes only. This option does not override the warp mode stored in the file. |
QC Task
QC
Example QC Object
{
"name": "Hybrik QC Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_folder/my_file.mp4"
}
}
},
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"audio": {
"volume": {
"enabled": true
},
"levels": {
"enabled": true
}
},
"video": {
"black": {
"duration_sec": 5,
"enabled": true
},
"black_borders": {
"black_level": 0.08,
"enabled": true
},
"interlacing": {
"enabled": true
},
"levels": {
"chroma_levels": true,
"histograms": true,
"enabled": true
}
}
}
}
},
{
"uid": "qc_task",
"kind": "qc",
"payload": {
"report": {
"file_pattern": "{source_basename}_qc_report.pdf",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder"
}
},
"conditions": {
"pass": [
{
"condition": "source.video.width >= 1920",
"message_pass": "Video is HD; actual value = {source.video.width}",
"message_fail": "Video is not HD; actual value = {source.video.width}"
}
],
"warn": [
{
"condition": "(abs(source.video.bitrate_kb - 50000)<5000)",
"message_fail": "Bitrate: WARNING video bitrate = 50Mbps; actual value = {source.video.bitrate_kb}"
}
]
}
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "analyze_task"
}
]
}
},
{
"from": [
{
"element": "analyze_task"
}
],
"to": {
"success": [
{
"element": "qc_task"
}
]
}
}
]
}
}
In Hybrik, the QC Task must be paired with a preceding Analyze Task. The Analyze Task will analyze the selected file and create a collection of metadata about the file. The QC Task sets pass, fail, and warn conditions based on the metadata collected by the Analyze Task. The conditions can use any parameter value generated by the Analyze task. Each condition can have a unique pass/fail message. The pass, fail, and warn settings are arrays of conditions. A pass condition will pass if the condition is true. A fail condition will generate a failure if the condition is true. For more details on using the QC task, see our tutorials.
qc
Name | Type | Description |
---|---|---|
condition | string | This string gets evaluated to true or false. Standard mathematical and logical operators can be part of the condition. Additionally, functions from the math.js JavaScript library may be used. |
message_pass | string | The string to be reported when the condition evaluates as true. Note that when the condition is being used in a "fail" array, a "message_pass" actually means that the fail condition has been met. |
message_fail | string | The string to be reported when the condition evaluates as false. Note that when the condition is being used in a "fail" array, a "message_fail" actually means that the fail condition has not been met. |
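The data flow of a condition can be sketched as follows: dotted metadata names from the Analyze Task are substituted into the condition string, the expression is evaluated, and the same names expand `{...}` placeholders in the resulting message. Hybrik evaluates these server-side with math.js; this Python stand-in, with hypothetical names, only illustrates the mechanism.

```python
import re

# Hypothetical metadata produced by a preceding Analyze Task.
metadata = {"source.video.width": 1280, "source.video.bitrate_kb": 48000}

def evaluate(condition, message_pass, message_fail, values):
    # Substitute dotted metadata names (longest first) and evaluate.
    expr = condition
    for name in sorted(values, key=len, reverse=True):
        expr = expr.replace(name, repr(values[name]))
    result = bool(eval(expr, {"__builtins__": {}}, {"abs": abs}))
    # Expand {source.video.width}-style placeholders in the chosen message.
    msg = message_pass if result else message_fail
    msg = re.sub(r"\{([^}]+)\}",
                 lambda m: str(values.get(m.group(1), m.group(0))), msg)
    return result, msg

ok, msg = evaluate(
    "source.video.width >= 1920",
    "Video is HD; actual value = {source.video.width}",
    "Video is not HD; actual value = {source.video.width}",
    metadata,
)
assert ok is False
assert msg == "Video is not HD; actual value = 1280"
```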
Notify Task
Notify
Example Notify Object
{
"name": "Hybrik Transcode Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264",
"profile": "high",
"level": "4.0",
"frame_rate": 23.976
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 128
}
]
}
]
}
},
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "email",
"email": {
"recipients": "{account_owner_email}",
"subject": "Job {job_id} has completed.",
"body": "File {source_basename} was processed."
}
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
},
{
"from": [
{
"element": "transcode_task"
}
],
"to": {
"success": [
{
"element": "notify_task"
}
]
}
}
]
}
}
In a Hybrik Job, you may want to trigger some external event or notify an individual when something particular happens. For example, you may want a failed job to send an email notification to someone in your QA department. This can be accomplished by adding a Notification Task to your workflow. Note that you can have multiple Notification Tasks within your workflow, triggering at different points. The types of notifications include REST calls, emails, and Amazon AWS Simple Notification Service (SNS) and/or Simple Queue Service (SQS) messaging. In addition to Notification Tasks, Hybrik also supports message Subscriptions. A Notification sends a one-time message at a specific point in the Workflow. A Subscription can provide ongoing status reports (such as progress) on a Job. Please see the Hybrik Tutorial on Subscriptions for more information.
notify
Name | Type | Description |
---|---|---|
notify_method | enum email rest sns sqs |
Notification method for messaging. default: rest |
email | object | Notification by email. |
rest | object | Notifications to a REST API. |
sns | object | Notification with AWS Simple Notification Service. https://aws.amazon.com/sns/ |
sqs | object | Notification with AWS Simple Queue Service. https://aws.amazon.com/sqs/ |
Example Email Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "email",
"email": {
"recipients": "{account_owner_email}",
"subject": "Job {job_id} has completed.",
"body": "File {source_basename} was processed."
}
}
}
notify.email
Name | Type | Description |
---|---|---|
recipients | string |
Email recipients, comma separated. Use {account_owner_email} to send an email to the Hybrik account owner. |
subject | string |
Email subject line. Placeholders such as {source_name} and {job_id} are supported. |
body | string |
Email body text. Placeholders such as {source_name} and {job_id} are supported. |
Rest
Example Rest Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "rest",
"rest": {
"url": "http://my_rest_entry1.example.com/1057729/complete",
"method": "PUT"
}
}
}
notify.rest
Name | Type | Description |
---|---|---|
url | string |
The URL for the REST call. Should be in complete format such as "http://my_rest_entry.example.com" default: http:// |
proxy | object | Optional REST proxy URL, if a proxy server is used. |
method | enum GET PUT POST DELETE |
HTTP method to use. |
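What a `rest` notification amounts to is an HTTP request with the configured method and URL. The sketch below only constructs such a request with the Python standard library to show the mapping from the configuration; it does not send anything, and the URL is the example value from above.

```python
import urllib.request

# Configuration values from an example notify.rest payload.
notify_rest = {
    "url": "http://my_rest_entry1.example.com/1057729/complete",
    "method": "PUT",
}

req = urllib.request.Request(notify_rest["url"], method=notify_rest["method"])
assert req.get_method() == "PUT"
assert req.full_url == notify_rest["url"]
# To actually deliver it: urllib.request.urlopen(req)
# (optionally through a ProxyHandler when notify.rest.proxy is configured).
```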
Proxy
Example Proxy Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "rest",
"rest": {
"url": "http://my_rest_entry1.example.com/1057729/complete",
"method": "PUT",
"proxy": {
"url": "http://www.proxy.example.com"
}
}
}
}
notify.rest.proxy
Name | Type | Description |
---|---|---|
url | string |
Proxy URL address for REST notifications. |
Sns
Example Sns Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sns",
"sns": {
"topic": "arn:aws:sns:us-east-1:34534534353453:my-sns",
"user_payload": "my SNS notification: {job_id}"
}
}
}
notify.sns
Name | Type | Description |
---|---|---|
topic | string | Amazon Web Services SNS topic. |
aws_access | oneOf basic_aws_credentials computing_group_ref credentials_vault_ref |
Amazon Web Services credentials to use for SNS messaging. |
user_payload | string |
Payload to attach to the SNS message. Placeholders may be used like {source_name} and {job_id}. |
Basic_aws_credentials
Example Basic_aws_credentials Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sns",
"sns": {
"topic": "arn:aws:sns:us-east-1:34534534353453:my-sns",
"user_payload": "my SNS notification: {job_id}",
"aws_access": {
"shared_key": "XXXXXXXXXXXXXXXXXXX",
"secret_key": "12345678901234567890"
}
}
}
}
notify.sns.basic_aws_credentials
Name | Type | Description |
---|---|---|
shared_key | string |
The AWS Key. |
secret_key | string |
The AWS Secret. |
session_token | string |
The AWS Session Token. |
region | string |
The AWS region (optional). |
max_cross_region_mb | integer | This sets the maximum amount of data (in MB) that can be transferred across regions. This is to avoid excessive inter-region transfer costs. Set this to -1 for unlimited transfers. |
Computing_group_ref
Example Computing_group_ref Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sns",
"sns": {
"topic": "arn:aws:sns:us-east-1:34534534353453:my-sns",
"user_payload": "my SNS notification: {job_id}",
"aws_access": {
"computing_group_id": "495292"
}
}
}
}
notify.sns.computing_group_ref
Name | Type | Description |
---|---|---|
computing_group_id | string |
Use the AWS credentials from the specified Computing Group ID |
max_cross_region_mb | integer | This sets the maximum amount of data (in MB) that can be transferred across regions. This is to avoid excessive inter-region transfer costs. Set this to -1 for unlimited transfers. |
Credentials_vault_ref
Example Credentials_vault_ref Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sns",
"sns": {
"topic": "arn:aws:sns:us-east-1:34534534353453:my-sns",
"user_payload": "my SNS notification: {job_id}",
"aws_access": {
"credentials_key": "my_AWS_vault_key"
}
}
}
}
notify.sns.credentials_vault_ref
Name | Type | Description |
---|---|---|
credentials_key | string |
Use API Key to reference credentials inside the Hybrik Credentials Vault. |
region | string |
AWS region, optional |
max_cross_region_mb | integer | This sets the maximum amount of data (in MB) that can be transferred across regions. This is to avoid excessive inter-region transfer costs. Set this to -1 for unlimited transfers. |
Sqs
Example Sqs Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sqs",
"sqs": {
"name": "my_sqs_queue",
"user_payload": "Job {job_id} has completed.",
"aws_access": {
"credentials_key": "my_aws_vault_key"
}
}
}
}
notify.sqs
Name | Type | Description |
---|---|---|
name | string | Amazon Web Services SQS Queue name. |
group_id | string | Amazon Web Services SQS MessageGroupId. |
provide_deduplication | boolean | Send a deduplication ID. |
aws_access | oneOf basic_aws_credentials computing_group_ref credentials_vault_ref |
Amazon Web Services credentials to use for SQS messaging. |
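When targeting an SQS FIFO queue, the group_id and provide_deduplication properties come into play. The fragment below is a hypothetical sketch: the queue name, group ID, and vault key are placeholder values, and it assumes the usual AWS convention that FIFO queue names end in .fifo.
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sqs",
"sqs": {
"name": "my_fifo_queue.fifo",
"group_id": "hybrik_jobs",
"provide_deduplication": true,
"user_payload": "Job {job_id} has completed.",
"aws_access": {
"credentials_key": "my_aws_vault_key"
}
}
}
}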
Basic_aws_credentials
Example Basic_aws_credentials Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sqs",
"sqs": {
"name": "my_sqs_queue",
"user_payload": "Job {job_id} has completed.",
"aws_access": {
"shared_key": "AWERDFSDFSAEFAWERF",
"secret_key": "2345rsdfswrqw4rw4redfljq34r23rf1223as"
}
}
}
}
notify.sqs.basic_aws_credentials
Name | Type | Description |
---|---|---|
shared_key | string |
The AWS Key. |
secret_key | string |
The AWS Secret. |
session_token | string |
The AWS Session Token. |
region | string |
The AWS region (optional). |
max_cross_region_mb | integer | This sets the maximum amount of data (in MB) that can be transferred across regions. This is to avoid excessive inter-region transfer costs. Set this to -1 for unlimited transfers. |
Computing_group_ref
Example Computing_group_ref Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sqs",
"sqs": {
"name": "my_sqs_queue",
"user_payload": "Job {job_id} has completed.",
"aws_access": {
"computing_group_id": "My Main Group"
}
}
}
}
notify.sqs.computing_group_ref
Name | Type | Description |
---|---|---|
computing_group_id | string |
Use the AWS credentials from the specified Computing Group ID. |
max_cross_region_mb | integer | This sets the maximum amount of data (in MB) that can be transferred across regions. This is to avoid excessive inter-region transfer costs. Set this to -1 for unlimited transfers. |
Credentials_vault_ref
Example Credentials_vault_ref Object
{
"uid": "notify_task",
"kind": "notify",
"payload": {
"notify_method": "sqs",
"sqs": {
"name": "my_sqs_queue",
"user_payload": "Job {job_id} has completed.",
"aws_access": {
"credentials_key": "my_aws_vault_key"
}
}
}
}
notify.sqs.credentials_vault_ref
Name | Type | Description |
---|---|---|
credentials_key | string |
Use an API key to reference credentials inside the Hybrik Credentials Vault. |
region | string |
AWS region, optional |
max_cross_region_mb | integer | This sets the maximum amount of data (in MB) that can be transferred across regions. This is to avoid excessive inter-region transfer costs. Set this to -1 for unlimited transfers. |
Copy Task
Copy
Example Copy Object
{
"name": "Hybrik Copy Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_source_bucket/my_source_folder/my_file.mp4"
}
}
},
{
"uid": "copy_task",
"kind": "copy",
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://my_destination_bucket/my_destination_folder"
},
"existing_files": "replace",
"file_pattern": "{source_name}"
}
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "copy_task"
}
]
}
}
]
}
}
A common requirement in a workflow is to move data from one place to another. After a transcode, for example, you may want to send the result to a client, or move a file from one AWS S3 bucket to another. These can all be accomplished with a Copy Task. Several transfer types are supported, including S3, HTTP, FTP, and SFTP. When specifying a Copy, you can define what happens if a file with the same name already exists in the target location, as well as what to do with the original file once it has been successfully copied.
copy
Name | Type | Description |
---|---|---|
target | object | Information about the target, including location, file naming, and method for handling existing files. |
options | object | Options for the copy operation, including control over source deletion and error handling. |
Target
Example Target Object
{
"uid": "copy_task",
"kind": "copy",
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://my_destination_bucket/my_destination_folder"
},
"existing_files": "delete_and_replace"
}
}
}
copy.target
Name | Type | Description |
---|---|---|
location | object | All sources will be copied to this location. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
existing_files | enum delete_and_replace replace replace_late rename_new rename_org fail |
The desired behavior when a target file already exists. "replace": will delete the original file and write the new one. "rename_new": gives the new file a different auto-generated name. "rename_org": renames the original file. Note that renaming the original file may not be possible depending on the target location. "delete_and_replace": attempts to immediately delete the original file. This will allow for fast failure in the case of inadequate permissions. "replace_late": does not attempt to delete the original -- simply executes a write. default: fail |
Options
Example Options Object
{
"uid": "copy_task",
"kind": "copy",
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://my_destination_bucket/my_destination_folder"
},
"existing_files": "delete_and_replace"
},
"options": {
"delete_source_on_completion": true
}
}
}
copy.options
Name | Type | Description |
---|---|---|
delete_sources | boolean |
Delete the task's source files upon successful completion of the task. |
delete_source_on_completion | boolean | Will delete source on successful copy. |
throttle_byte_per_sec | integer string |
Throttle the copy operation. Useful, for example, for operation with SwiftStack storage on shared network lines. |
restore_from_glacier | boolean | If a file is on S3 with the storage class 'GLACIER', issue a restore and wait for arrival before copying. |
multi_file_concurrency | integer | If multiple files are to be copied, limit the concurrency of the copy operation. |
multi_source_mode | enum use_first use_all concatenate |
If multiple tasks feed into a single copy task: "use_all": Copy from all incoming tasks to the output location (default). "concatenate": Concatenate the outputs from all incoming tasks into a single output file. "use_first": Only copy output from first task, but wait for all incoming tasks to complete before proceeding in the workflow. |
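Several of these options can be combined. The fragment below is an illustrative sketch (bucket names and values are placeholders) that concatenates the outputs of multiple incoming tasks, limits copy concurrency, and restores any Glacier-class sources before copying:
{
"uid": "copy_task",
"kind": "copy",
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://my_destination_bucket/my_destination_folder"
},
"existing_files": "replace"
},
"options": {
"multi_source_mode": "concatenate",
"multi_file_concurrency": 4,
"restore_from_glacier": true
}
}
}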
Delete Asset Task
Delete_asset
Example Delete_asset Object
{
"name": "Hybrik Delete Asset Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://bucket_name/file_name.mov"
}
}
},
{
"uid": "delete_task",
"kind": "delete_asset",
"payload": {
"asset_selector": "workflow_document"
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "delete_task"
}
]
}
}
]
}
}
In addition to moving data between locations, Hybrik also supports deleting assets. This can be useful when you need to explicitly clean up temporary file locations. Using a separate task for deleting files is helpful in workflows where individual tasks cannot delete intermediate files because another workflow task may still need them. An example of this is when multiple packaging tasks all use the same intermediate transcode files: none of the package tasks can remove the intermediates, since they are needed by the other package tasks. Instead, a delete_asset task can run after all package tasks are complete. Note that if a folder is specified for deletion, all sub-folders will be deleted recursively. In order to delete a folder, you must explicitly re-confirm the target folder name to ensure there is no accidental deletion.
delete_asset
Name | Type | Description |
---|---|---|
asset_selector | enum workflow_document config |
Selecting the "workflow_document" will delete the outputs of the prior task. Selecting "config" allows you to specify explictly which assets you wish to delete. |
asset | object | Object defining the parameters of an asset, including location, trim points, etc. [DESC] |
location | object | |
delete_folder_acknowledgement | string | If deleting a folder, you need to specify the exact name of the folder here. |
ignore_errors | boolean | If set to "true", then any errors while attempting to delete assets will not cause the task and job to fail. Examples of potential errors include files not present or insufficient permissions. |
Asset
Example Asset Object
{
"uid": "delete_task",
"kind": "delete_asset",
"payload": {
"asset_selector": "workflow_document"
}
}
delete_asset.asset
Name | Type | Description |
---|---|---|
kind | enum asset_url asset_urls asset_complex |
The type of asset. asset_url is a single asset, asset_urls is an array of assets, and asset_complex is an asset assembled from multiple components. |
payload | anyOf asset_url asset_urls |
The asset description payload. |
Package Task
Package
Example Package Object
"name": "Hybrik Package Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_folder/my_file.mp4"
}
}
},
{
"uid": "transcode_all_renditions",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/mp4s"
},
"targets": [
{
"file_pattern": "{source_basename}_800kbps{default_extension}",
"existing_files": "replace",
"container": {
"kind": "fmp4",
"segment_duration_sec": 6
},
"video": {
"codec": "h264",
"bitrate_mode": "cbr",
"use_scene_detection": false,
"bitrate_kb": 800,
"height": 486
}
},
{
"file_pattern": "{source_basename}_400kbps{default_extension}",
"existing_files": "replace",
"container": {
"kind": "fmp4",
"segment_duration_sec": 6
},
"video": {
"codec": "h264",
"bitrate_mode": "cbr",
"use_scene_detection": false,
"bitrate_kb": 400,
"height": 360
}
},
{
"file_pattern": "{source_basename}_200kbps{default_extension}",
"existing_files": "replace",
"container": {
"kind": "fmp4",
"segment_duration_sec": 6
},
"video": {
"codec": "h264",
"bitrate_mode": "cbr",
"use_scene_detection": false,
"bitrate_kb": 200,
"height": 252
}
},
{
"file_pattern": "{source_basename}_audio_64kbps{default_extension}",
"existing_files": "replace",
"container": {
"kind": "fmp4",
"segment_duration_sec": 6
},
"audio": [
{
"channels": 2,
"codec": "aac_lc",
"sample_rate": 48000,
"bitrate_kb": 96
}
]
}
]
}
},
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests",
"attributes": [
{
"name": "ContentType",
"value": "application/x-mpegURL"
}
]
},
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"segment_duration_sec": "6",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_media",
"attributes": [
{
"name": "ContentType",
"value": "video/MP2T"
}
]
},
"media_file_pattern": "{source_basename}.ts",
"hls": {
"media_playlist_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
}
}
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_all_renditions"
}
]
}
},
{
"from": [
{
"element": "transcode_all_renditions"
}
],
"to": {
"success": [
{
"element": "package_hls"
}
]
}
}
]
}
}
In addition to creating many different types of output files, Hybrik can also package these files into Adaptive Bitrate (ABR) formats like HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH). The Package Task takes the results from a Transcode Task and re-multiplexes the output into the required HLS or DASH formats, including creating the various required manifest files. These manifest files tell the player how to switch between the various bitrates in order to create smooth playback. The Package Task supports multiple audio tracks as well as subtitles in both HLS and DASH. You can also encrypt your ABR files in Hybrik.
package
Name | Type | Description |
---|---|---|
options | object | Packaging options including source deletion and validation steps. |
location | object | This will override any location defined within the parent of this manifest. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
force_original_media | boolean | Use the original transcoded files rather than remuxing them in the Package task to create the HLS/DASH outputs. Requires that the original files have correct segmentation. |
kind | enum hls dash smooth |
The kind of package to create. Options are: "dash", "hls", or "smooth". |
dash | object | MPEG-DASH specific settings, only valid if kind is "dash". |
smooth | object | Smooth Streaming specific settings, only valid if kind is "smooth". |
hls | object | HLS specific settings, only valid if kind is "hls". |
encryption | object | DRM and encryption settings for the produced media files. |
encryptions | array | An array of encryption definitions, which can be referenced by ID. |
segmentation_mode | enum segmented_ts single_ts multiplexed_ts fmp4 segmented_mp4 |
Type of segmentation for media files. |
segment_duration_sec | number | Desired duration of segments. Valid for segmented_ts, single_ts, multiplexed_ts, fmp4, segmented_mp4 modes. |
media_location | object | The location of media files, if they need processing. |
media_url_prefix | string | The URL prefix to be added to media locations. |
media_url_template | string | A media URL template. The actual media file must be referenced via {media_file} within. |
media_file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
media_file_extensions | object | The extensions to be used for each type of media in the output. |
init_file_pattern | string |
The file pattern for the segmented mp4 init file. |
title | string | An optional title. Note that not all multiplexers support adding a title. |
author | string | An optional author. Note that not all multiplexers support adding an author. |
copyright | string | An optional copyright string. Note that not all multiplexers support adding a copyright string. |
info_url | string | An optional info URL string. Note that not all multiplexers support adding a url. |
uid | string | This describes the manifest UID. Encodes to be included in this manifest creation must include this UID in their manifest_uids property. If no UID is specified here, then all encodes are included. |
closed_captions | array | An array of closed-caption references to be included in the manifest. |
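When the media files are served from a different host than the manifests, media_url_prefix or media_url_template can rewrite the media references in the manifest. The fragment below is a hypothetical sketch (the CDN hostname is a placeholder) using a template, in which {media_file} is replaced with the actual media file name:
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"media_url_template": "https://cdn.example.com/media/{media_file}"
}
}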
Options
Example Options Object
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"options": {
"delete_sources": true,
"skip_validation": true
},
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests",
"attributes": [
{
"name": "ContentType",
"value": "application/x-mpegURL"
}
]
},
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"segment_duration_sec": "6",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_media",
"attributes": [
{
"name": "ContentType",
"value": "video/MP2T"
}
]
},
"media_file_pattern": "{source_basename}.ts",
"hls": {
"media_playlist_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
}
}
}
}
package.options
Name | Type | Description |
---|---|---|
delete_sources | boolean |
Delete the packaging source files (not the master sources) upon successful completion of the task. |
skip_validation | boolean |
Skip the validation step for the Package task. |
Dash
Example Dash Object
{
"uid": "dash_unencrypted_packaging",
"kind": "package",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder",
"attributes": [
{
"name": "ContentType",
"value": "application/dash+xml"
}
]
},
"file_pattern": "manifest.mpd",
"kind": "dash",
"uid": "main_manifest",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder",
"attributes": [
{
"name": "ContentType",
"value": "video/mp4"
}
]
},
"dash": {
"title": "some dash title",
"info_url": "www.dolby.com"
}
}
}
package.dash
Name | Type | Description |
---|---|---|
location | object | This will override any location defined within the parent of this manifest. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
compliance | enum generic hmmp senvu_2012 hmmpabs_2016 hmmpabs_2016_sdr hmmpabs_2017 |
Specifies the MPEG-DASH compliance settings to use. |
compliances | array enum generic hmmp senvu_2012 hmmpabs_2016 hmmpabs_2016_sdr hmmpabs_2017 |
An array specifying the MPEG-DASH compliance settings to use. |
base_url | string | MPEG-DASH MPD BaseURL. |
title | string | MPEG-DASH MPD title. |
copyright | string | MPEG-DASH MPD copyright. |
info_url | string | MPEG-DASH MPD info URL. |
remove_partial_mpd | boolean | Unlike HLS, MPEG-DASH does not reference per-layer manifest files. Enable this to remove those potentially obsolete files. |
profile_urns | array | MPEG-DASH profile URNs. |
subtitle_container | enum webvtt fmp4 |
Specifies whether to use WebVTT or FMP4 as the container for subtitles. |
manifest_location | object | The location of the MPD manifest files. |
manifest_file_pattern | string |
The file pattern of the MPD manifest files. default: {source_basename} |
adaptation_sets | array | An array defining the Adaptation Sets included in the DASH manifest. Commonly, you will have one video Adaptation Set and multiple audio Adaptation Sets. |
use_segment_list | boolean | Uses the segment list instead of segment templates. |
use_segment_timeline | boolean | Uses the segment timelines in segment templates. |
Adaptation_sets
Example Adaptation_sets Object
{
"uid": "dash_unencrypted_packaging",
"kind": "package",
"payload": {
"kind": "dash",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder",
"attributes": [
{
"name": "ContentType",
"value": "application/dash+xml"
}
]
},
"file_pattern": "manifest.mpd",
"uid": "main_manifest",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder",
"attributes": [
{
"name": "ContentType",
"value": "video/mp4"
}
]
},
"dash": {
"title": "some dash title",
"info_url": "www.dolby.com",
"adaptation_sets": [
{
"track_group_id": "1",
"id": 1,
"role": "main"
}
]
}
}
}
package.dash.adaptation_sets
Name | Type | Description |
---|---|---|
track_group_id | string | This indicates which Group this track belongs to. Multiple tracks with the same content but different bitrates would have the same track_group_id. |
id | integer | The ID for the Adaptation Set. |
role | string | The Role for the Adaptation Set. |
Smooth
Example Smooth Object
{
"uid": "smooth_encrypted_packaging",
"kind": "package",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder",
"attributes": [
{
"name": "ContentType",
"value": "application/vnd.ms-sstr+xml"
}
]
},
"file_pattern": "manifest.ism",
"kind": "smooth",
"uid": "main_manifest",
"force_original_media": false,
"encryption": {
"enabled": true,
"schema": "mpeg-cenc",
"drm": [
"playready"
],
"key_id": "XXXXXX51686b5e1ba222439ecec1f12a",
"key": "0000002cbf1a827e2fecfb87479a2",
"playready_url": "http://playready.directtaps.net/pr/svc/rightsmanager.asmx"
},
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_folder",
"attributes": [
{
"name": "ContentType",
"value": "video/mp4"
}
]
}
}
}
package.smooth
Name | Type | Description |
---|---|---|
location | object | This will override any location defined within the parent of this manifest. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
title | string | The manifest title. |
copyright | string | The manifest copyright. |
info_url | string | The manifest info URL. |
profile_urns | array | Profile URNs. |
manifest_location | object | The location of the .ismc client manifest files. |
manifest_file_pattern | string |
The file pattern of the .ismc client manifest files. default: {source_basename} |
server_manifest_location | object | The location of the .ism server manifest files. |
server_manifest_file_pattern | string |
The file pattern of the .ism server manifest files. default: {source_basename} |
Hls
Example Hls Object
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests",
"attributes": [
{
"name": "ContentType",
"value": "application/x-mpegURL"
}
]
},
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"segment_duration_sec": "6",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_media",
"attributes": [
{
"name": "ContentType",
"value": "video/MP2T"
}
]
},
"media_file_pattern": "{source_basename}.ts",
"hls": {
"media_playlist_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
},
"include_iframe_manifests": true,
"primary_layer_uid": "Layer4"
}
}
}
package.hls
Name | Type | Description |
---|---|---|
location | object | This will override any location defined within the parent of this manifest. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
version | enum 3 4 5 6 7 8 9 10 |
The HLS version for packaging. |
ietf_draft_version | integer | Attributes and tags newer than the version listed here may be omitted. |
primary_layer_uid | string |
If one of the included targets has a matching UID, it will be listed as the first layer in the HLS master manifest. |
include_iframe_manifests | boolean | If the individual layers have i-frame/trick play manifests, include these in the master manifest. This requires HLS version 4 or greater. |
master_manifest_iframe_references_location | enum before_media after_media |
Position i-frame/trick play manifest references before or after the media manifest references in the master manifest. |
media_playlist_location | object | The location of the media playlist .m3u8 files. |
media_playlist_url_prefix | string | The URL prefix to be added to media playlist locations. |
media_playlist_file_pattern | string |
The file pattern of the media playlist .m3u8 files. default: {source_basename} |
manifest_location | object | The location of the master .m3u8 files. |
manifest_file_pattern | string |
The file pattern of the master .m3u8 files. Example: {source_basename}_master_manifest.m3u8 default: {source_basename} |
align_to_av_media | boolean | For subtitles only, align duration and segmenting to A/V media time. |
Encryption
Example Encryption Object
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests",
"attributes": [
{
"name": "ContentType",
"value": "application/x-mpegURL"
}
]
},
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"segment_duration_sec": "6",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_media",
"attributes": [
{
"name": "ContentType",
"value": "video/MP2T"
}
]
},
"media_file_pattern": "{source_basename}.ts",
"hls": {
"media_playlist_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
}
},
"encryption": {
"enabled": true,
"schema": "mpeg-cenc",
"drm": [
"playready"
],
"key_id": "[32 char hex sequence]",
"key": "[32 char hex sequence]",
"content_id": "2a",
"playready_pssh": "[base-64 encoded pssh]...=="
}
}
}
package.encryption
Name | Type | Description |
---|---|---|
id | string | Encryption id, used for referencing encryptions |
enabled | boolean | Enable or disable encryption. |
schema | enum aes-128-cbc sample-aes mpeg-cenc mpeg-cbc1 mpeg-cens mpeg-cbcs none |
The chosen encryption schema. Encryption keys will be generated by Hybrik. default: aes-128-cbc |
drm | array enum playready fairplay widevine clearkey |
An array specifying the types of DRM that will be used. |
rotation | integer |
The encryption rotation interval. Every N file segments, a new encryption key will be generated. default: 12 |
external_key_file | boolean | Use the externally created key file with path from key_location. Do not store key file. |
key_location | object | The optional key location. This will override any location defined within the parent of this task. |
key_file_pattern | string |
This describes the key file name. Placeholders such as {source_basename} for source file name are supported. |
key | string | The actual key, if pre-supplied. |
iv | string | The initialization vector, if pre-supplied. |
key_id | string | The Key ID. Used for MPEG-CENC only. |
key_seed | string | The Key seed. |
content_id | string | The Content ID. Used for MPEG-CENC only. |
widevine_provider | string | The Widevine provider. |
widevine_pssh | string | A Widevine PSSH string. |
playready_url | string | The PlayReady licensing authority URL. |
playready_pssh | string | A PlayReady PSSH string. |
playready_version | enum 4.0 4.1 4.2 4.3 |
The PlayReady version. |
fairplay_uri | string | The FairPlay URI for the HLS URI attribute. |
clearkey_pssh_version | integer | The PSSH box version for CENC. |
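For simple HLS encryption without an external DRM system, the default aes-128-cbc schema with key rotation may be sufficient. The fragment below is an illustrative sketch (the rotation value is arbitrary); per the schema description above, Hybrik generates the encryption keys itself:
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"segment_duration_sec": "6",
"encryption": {
"enabled": true,
"schema": "aes-128-cbc",
"rotation": 24
}
}
}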
Encryptions
Example Encryptions Object
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests",
"attributes": [
{
"name": "ContentType",
"value": "application/x-mpegURL"
}
]
},
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"segment_duration_sec": "180",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_media",
"attributes": [
{
"name": "ContentType",
"value": "video/MP2T"
}
]
},
"media_file_pattern": "{source_basename}.ts",
"hls": {
"media_playlist_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
}
},
"encryptions": [
{
"id": "id_playready_encryption",
"enabled": true,
"schema": "mpeg-cenc",
"drm": [
"playready"
],
"key_id": "[hex sequence]",
"key": "[hex sequence]",
"content_id": "2a",
"playready_pssh": "[base-64 encoded pssh]"
},
{
"id": "id_fairplay_encryption",
"enabled": true,
"schema": "mpeg-cenc",
"drm": [
"fairplay"
],
"key_id": "[hex sequence]",
"key": "[hex sequence]",
"content_id": "3a",
"fairplay_uri": "skd://your_fairplay_skd_uri"
}
]
}
}
package.encryptions
Name | Type | Description |
---|---|---|
id | string | Encryption id, used for referencing encryptions |
enabled | boolean | Enable or disable encryption. |
schema | enum aes-128-cbc sample-aes mpeg-cenc mpeg-cbc1 mpeg-cens mpeg-cbcs none |
The chosen encryption schema. Encryption keys will be generated by Hybrik. default: aes-128-cbc |
drm | array enum playready fairplay widevine clearkey |
An array specifying the types of DRM that will be used. |
rotation | integer |
The encryption rotation interval. Every N file segments, a new encryption key will be generated. default: 12 |
external_key_file | boolean | Use the externally created key file with path from key_location. Do not store key file. |
key_location | object | The optional key location. This will override any location defined within the parent of this task. |
key_file_pattern | string |
This describes the key file name. Placeholders such as {source_basename} for source file name are supported. |
key | string | The actual key, if pre-supplied. |
iv | string | The initialization vector, if pre-supplied. |
key_id | string | The Key ID. Used for MPEG-CENC only. |
key_seed | string | The Key seed. |
content_id | string | The Content ID. Used for MPEG-CENC only. |
widevine_provider | string | The Widevine provider. |
widevine_pssh | string | A Widevine PSSH string. |
playready_url | string | The PlayReady licensing authority URL. |
playready_pssh | string | A PlayReady PSSH string. |
playready_version | enum 4.0 4.1 4.2 4.3 |
The PlayReady version. |
fairplay_uri | string | The FairPlay URI for the HLS URI attribute. |
clearkey_pssh_version | integer | The PSSH box version for CENC. |
Media_file_extensions
Example Media_file_extensions Object
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
},
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"segment_duration_sec": "6",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_media"
},
"media_file_pattern": "{source_basename}.ts",
"hls": {
"media_playlist_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
}
},
"media_file_extensions": {
"audio": ".ts",
"video": ".ts",
"subtitle": ".vtt"
}
}
}
package.media_file_extensions
Name | Type | Description |
---|---|---|
audio | string | The extension to be used for audio files. |
video | string | The extension to be used for video files. |
subtitle | string | The extension to be used for subtitle files. |
Closed_captions
Example Closed_captions Object
{
"uid": "package_hls",
"kind": "package",
"payload": {
"kind": "hls",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
},
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "segmented_ts",
"segment_duration_sec": "6",
"force_original_media": false,
"media_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_media"
},
"media_file_pattern": "{source_basename}.ts",
"hls": {
"media_playlist_location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder/hls_manifests"
}
},
"closed_captions": [
{
"cc_index": "1",
"language": "es",
"is_defualt": true,
"is_autoselect": true,
"group_id": "Spanish"
}
]
}
}
package.closed_captions
Name | Type | Description |
---|---|---|
cc_index | integer | The index value for the closed-caption track. minimum: 1 maximum: 4 |
language | string | The language of the closed-caption track. |
is_default | boolean | Setting to specify that this track is the default closed-caption track. |
is_autoselect | boolean | Setting to automatically auto-select this track. |
group_id | string | The GroupID of the track. |
Folder Enum Task
Folder_enum
Example Folder_enum Object
{
"name": "Hybrik Folder Enumeration Example",
"payload": {
"elements": [
{
"uid": "folder_enum_task",
"kind": "folder_enum",
"payload": {
"source": {
"storage_provider": "s3",
"url": "s3://my_source_bucket/my_source_folder"
},
"settings": {
"pattern_matching": "wildcard",
"wildcard": "*",
"recursive": true
}
}
},
{
"uid": "copy_task",
"kind": "copy",
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://my_destination_bucket/my_destination_folder"
},
"existing_files": "replace",
"file_pattern": "{source_name}"
}
}
}
],
"connections": [
{
"from": [
{
"element": "folder_enum_task"
}
],
"to": {
"success": [
{
"element": "copy_task"
}
]
}
}
]
}
}
Hybrik allows automating media workflows via the API or Watchfolders. There are times, however, when all you want to do is process an entire folder of content. For this, we have the Folder Enumeration Task (known as Folder_enum). A Folder_enum job looks like a regular Hybrik job, but instead of a Source Element naming a specific file, it has a Folder_enum Element pointing to a folder location. This job will actually create multiple jobs, one for each source file in the specified folder location. You can control the types of files that get selected, as well as whether to move recursively through subfolders. With a single submission, a Folder_enum task can trigger hundreds (or thousands) of jobs. In addition to triggering transcode jobs, the Folder_enum task is handy for moving large amounts of data quickly between S3 locations. If you use Folder_enum as the source element for a copy job, it will generate a copy job for each file. These jobs are allocated to the available machines, which can vastly accelerate the movement of data by spreading the transfer load across many machines. Depending on the number of machines assigned, this can move data hundreds of times faster than a standard S3 data move.
folder_enum
Name | Type | Description |
---|---|---|
source | object | The source location. |
settings | object | The recursion and pattern matching settings for the folder enumeration. |
Source
Example Source Object
{
"uid": "folder_enum_task",
"kind": "folder_enum",
"payload": {
"source": {
"storage_provider": "s3",
"url": "s3://my_source_bucket/my_source_folder"
},
"settings": {
"pattern_matching": "wildcard",
"wildcard": "*",
"recursive": true
}
}
}
folder_enum.source
Name | Type | Description |
---|---|---|
storage_provider | enum ftp sftp s3 gs box akamains swift swiftstack http relative internal |
Select the type of location, such as S3, FTP, etc. default: s3 |
path | string |
Describes the path of the location. Full URLs are required. Example: s3://my_bucket/my_path |
access | anyOf ftp sftp http s3 gs box akamains swift swiftstack ssh |
This contains credentials granting access to the location. |
permissions | anyOf | This contains access permissions to be applied to objects in the location upon creation. |
encryption | anyOf | This contains encryption settings to be applied to objects on retrieval or creation. |
attributes | array | This contains attributes, such as CacheControl headers, to be applied to objects in the location upon creation. |
Attributes
Example Attributes Object
{
"uid": "folder_enum_task",
"kind": "folder_enum",
"payload": {
"source": {
"storage_provider": "s3",
"url": "s3://my_source_bucket/my_source_folder",
"attributes": [
{
"name": "Cache-Control",
"value": "max-age=3600"
}
]
},
"settings": {
"pattern_matching": "wildcard",
"wildcard": "*",
"recursive": true
}
}
}
folder_enum.source.attributes
Name | Type | Description |
---|---|---|
name | string | The name component of a name/value pair. |
value | string | The value component of a name/value pair. |
Settings
Example Settings Object
{
"uid": "folder_enum_task",
"kind": "folder_enum",
"payload": {
"source": {
"storage_provider": "s3",
"url": "s3://my_source_bucket/my_source_folder"
},
"settings": {
"pattern_matching": "wildcard",
"wildcard": "*.mov",
"recursive": false
}
}
}
folder_enum.settings
Name | Type | Description |
---|---|---|
recursive | boolean |
Determines whether sub-folders should be scanned recursively for content. |
pattern_matching | enum wildcard regex |
The type of pattern matching to use. default: wildcard |
wildcard | string |
The wildcard pattern to match against. For example, *.mov will match only .mov files. default: * |
regex | string |
The regular expression to be used for pattern matching. |
files_per_job | integer | The maximum number of files to be copied in each job. The default is 1 file per job. minimum: 1 maximum: 100 |
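Since files_per_job caps how many files each generated job handles, the number of jobs a folder_enum submission spawns is the file count divided by the batch size, rounded up. A quick sketch of the arithmetic:

```python
import math

def jobs_spawned(file_count: int, files_per_job: int = 1) -> int:
    """Number of jobs a folder_enum submission will generate."""
    if not 1 <= files_per_job <= 100:
        raise ValueError("files_per_job must be between 1 and 100")
    return math.ceil(file_count / files_per_job)
```

For example, enumerating 2,500 files with files_per_job set to 100 produces 25 jobs, while the default of 1 file per job produces 2,500.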
Watchfolder Task
Watchfolder
Example Watchfolder Object
{
"name": "Hybrik Watchfolder Example",
"payload": {
"elements": [
{
"uid": "watchfolder_source",
"kind": "watchfolder",
"task": {
"tags": [
"WATCH_FOLDER"
]
},
"payload": {
"source": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_watchfolder"
},
"settings": {
"key": "watch_folder",
"interval_sec": 5,
"pattern_matching": "wildcard",
"wildcard": "*",
"recursive": true,
"process_existing_files": true
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264",
"profile": "high",
"level": "4.0",
"frame_rate": "24000/1001"
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 128
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "watchfolder_source"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
Hybrik supports complete automated control through its REST-based API. Many production workflows, however, can be managed with a simple Watchfolder, without doing any programming. The goal of a Watchfolder Task is simple: when a new file shows up in the watchfolder location, trigger a new job. A Watchfolder Job looks very similar to a standard encoding job; the difference is that instead of specifying a particular source file, you specify a folder where sources will appear.
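The polling behavior can be pictured as a diff against a set of already-seen files (an illustration of the semantics only, not Hybrik's implementation; the seen-set corresponds to the tracking controlled by the key and process_existing_files settings):

```python
def poll_once(current_files, seen, process_existing=True, first_poll=False):
    """One watchfolder poll: return the files that should trigger new jobs."""
    new = [f for f in current_files if f not in seen]
    seen.update(new)  # record everything, so a file never triggers twice
    if first_poll and not process_existing:
        return []     # pre-existing files are tracked but not processed
    return new
```

On the first poll, files already in the folder are processed only when process_existing is true; on every later poll, only files not seen before trigger jobs.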
watchfolder
Name | Type | Description |
---|---|---|
source | object | The location of the watchfolder. |
settings | object | The settings for the watchfolder. |
Source
Example Source Object
{
"uid": "watchfolder_source",
"kind": "watchfolder",
"payload": {
"source": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_watchfolder"
},
"settings": {
"key": "watch_folder",
"interval_sec": 5,
"pattern_matching": "wildcard",
"wildcard": "*",
"recursive": true
}
}
}
watchfolder.source
Name | Type | Description |
---|---|---|
storage_provider | enum ftp sftp s3 gs box akamains swift swiftstack http relative internal |
Select the type of location, such as S3, FTP, etc. default: s3 |
path | string |
Describes the path of the location. Full URLs are required. Example: s3://my_bucket/my_path |
access | anyOf ftp sftp http s3 gs box akamains swift swiftstack ssh |
This contains credentials granting access to the location. |
permissions | anyOf | This contains access permissions to be applied to objects in the location upon creation. |
encryption | anyOf | This contains encryption settings to be applied to objects on retrieval or creation. |
attributes | array | This contains attributes, such as CacheControl headers, to be applied to objects in the location upon creation. |
Attributes
Example Attributes Object
{
"payload": {
"source": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_watchfolder",
"attributes": [
{
"name": "Cache-Control",
"value": "max-age=3600"
}
]
},
"settings": {
"key": "watch_folder",
"wildcard": "*",
"recursive": true
}
}
}
watchfolder.source.attributes
Name | Type | Description |
---|---|---|
name | string | The name component of a name/value pair. |
value | string | The value component of a name/value pair. |
Settings
Example Settings Object
{
"uid": "watchfolder_source",
"kind": "watchfolder",
"task": {
"tags": [
"WATCH_FOLDER"
]
},
"payload": {
"source": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_watchfolder"
},
"settings": {
"key": "watch_folder",
"interval_sec": 5,
"pattern_matching": "wildcard",
"wildcard": "*",
"recursive": true,
"process_existing_files": true
}
}
}
watchfolder.settings
Name | Type | Description |
---|---|---|
key | string | A unique key to identify this watchfolder for tracking processed source files. |
watch_items_persistence | enum tracked untracked |
When a watchfolder is stopped and later re-started, this parameter indicates whether Hybrik should track all files previously processed by the watchfolder. |
interval_sec | integer |
Polling interval in seconds for the watch folder. default: 300 |
recursive | boolean |
Recursively watch the folder and its sub-folders. |
process_existing_files | boolean |
When the watchfolder process initiates, there may already be files in the watched location. Setting this value to true indicates that these pre-existing files should trigger new jobs. |
pattern_matching | enum wildcard regex |
Whether to use a simple file-system style wildcard to filter incoming files, or to use a regular expression. default: wildcard |
wildcard | string |
The wildcard expression to use for pattern matching. default: * |
regex | string |
A regular expression may be used to match only certain file names for processing. |
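The two pattern_matching modes map directly onto shell-style globbing versus regular expressions. A quick Python illustration (fnmatch mirrors wildcard semantics, re mirrors regex):

```python
import fnmatch
import re

files = ["promo.mov", "promo.mp4", "notes.txt"]

# pattern_matching: "wildcard" with wildcard: "*.mov"
wildcard_hits = [f for f in files if fnmatch.fnmatch(f, "*.mov")]

# pattern_matching: "regex" -- regex can express alternatives a wildcard cannot
regex_hits = [f for f in files if re.search(r"\.(mov|mp4)$", f)]
```

Here wildcard_hits contains only promo.mov, while the regex also matches promo.mp4, which is the typical reason to switch modes.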
DPP Packager Task
Dpp_packager
Example Dpp_packager Object
{
"name": "Hybrik DPP Packager Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4"
}
}
},
{
"uid": "dpp_package_task",
"kind": "dpp_packager",
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_destination"
},
"file_pattern": "{source_basename}_dpp.mxf"
},
"dpp_schema": "d10"
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "dpp_package_task"
}
]
}
}
]
}
}
The Digital Production Partnership (DPP) is a not-for-profit company originally created by the major British broadcasters to define standards and guidelines for television production. Hybrik supports the AS-11 DPP standard, which covers both media creation and metadata packaging. The DPP Packager step runs after the Transcode Task.
dpp_packager
Name | Type | Description |
---|---|---|
options | object | The options for the DPP packager. |
target | object | The target for the DPP package. |
dpp_schema | enum d10 op1a rdd9 as02 |
Specifies which DPP schema will be used. |
as11_options | object |
Options
Example Options Object
{
"uid": "dpp_package_task",
"kind": "dpp_packager",
"options": {
"delete_sources": true
},
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_destination"
},
"file_pattern": "{source_basename}_dpp.mxf"
},
"dpp_schema": "d10"
}
}
dpp_packager.options
Name | Type | Description |
---|---|---|
delete_sources | boolean |
Delete the task's source files upon successful completion of the task. |
Target
Example Target Object
{
"uid": "dpp_package_task",
"kind": "dpp_packager",
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_destination"
},
"file_pattern": "{source_basename}_dpp.mxf"
},
"dpp_schema": "d10"
}
}
dpp_packager.target
Name | Type | Description |
---|---|---|
location | object | The result will be copied to this location. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
existing_files | enum delete_and_replace replace replace_late rename_new rename_org fail |
The desired behavior when a target file already exists. "replace": will delete the original file and write the new one. "rename_new": gives the new file a different auto-generated name. "rename_org": renames the original file. Note that renaming the original file may not be possible depending on the target location. "delete_and_replace": attempts to immediately delete the original file. This will allow for fast failure in the case of inadequate permissions. "replace_late": does not attempt to delete the original -- simply executes a write. default: fail |
As11_options
Example As11_options Object
{
"uid": "dpp_packager",
"kind": "dpp_packager",
"task": {
"retry_method": "fail"
},
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"file_pattern": "tos_mxfdpp_as11_metadata.mxf"
},
"dpp_schema": "op1a",
"as11_options": {
"core": {
"SeriesTitle": "Your Title Here",
"ProgrammeTitle": "Your Title Here",
"EpisodeTitleNumber": "3",
"ShimName": "UK DPP HD",
"ShimVersion": "1.1",
"AudioTrackLayout": 3,
"PrimaryAudioLanguage": "eng",
"ClosedCaptionsPresent": false
},
"ukdpp": {
"ProductionNumber": "C5/30571/0001A",
"Synopsis": "Your Synopsis Here",
"Originator": "Originator or Studio Name Here",
"CopyrightYear": 2010,
"PictureRatio": "16:9",
"Distributor": "Distributor Name Here",
"ThreeD": false,
"ProductPlacement": false,
"PSEPass": 1,
"PSEManufacturer": "Harding FPA",
"PSEVersion": "3.4.0",
"SecondaryAudioLanguage": "zxx",
"TertiaryAudioLanguage": "zxx",
"AudioLoudnessStandard": 0,
"LineUpStart": "09:59:30:00",
"IdentClockStart": "09:59:50:00",
"AudioDescriptionPresent": false,
"OpenCaptionsPresent": false,
"SigningPresent": 1,
"CompletionDate": "2020-10-10",
"ContactEmail": "contact@emailaddress.com",
"ContactTelephoneNumber": "02035403630",
"TotalNumberOfParts": 3,
"TotalProgrammeDuration": "00:04:30:00"
},
"segmentation": [
{
"start": "10:00:00:00",
"duration": "00:01:00:00"
},
{
"start": "10:01:00:00",
"duration": "00:02:00:00"
},
{
"start": "10:03:00:00",
"duration": "00:01:30:00"
}
]
}
}
}
dpp_packager.as11_options
Name | Type | Description |
---|---|---|
core | object | |
ukdpp | object | |
segmentation | array |
Core
Example Core Object
{
"uid": "dpp_packager",
"kind": "dpp_packager",
"task": {
"retry_method": "fail"
},
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"file_pattern": "tos_mxfdpp_as11_metadata.mxf"
},
"dpp_schema": "op1a",
"as11_options": {
"core": {
"SeriesTitle": "Your Title Here",
"ProgrammeTitle": "Your Title Here",
"EpisodeTitleNumber": "3",
"ShimName": "UK DPP HD",
"ShimVersion": "1.1",
"AudioTrackLayout": 3,
"PrimaryAudioLanguage": "eng",
"ClosedCaptionsPresent": false
},
"ukdpp": {},
"segmentation": []
}
}
}
dpp_packager.as11_options.core
Name | Type | Description |
---|---|---|
SeriesTitle | string | |
ProgrammeTitle | string | |
EpisodeTitleNumber | string | |
ShimName | string | |
ShimVersion | string | |
AudioTrackLayout | integer | |
PrimaryAudioLanguage | string | |
ClosedCaptionsPresent | boolean | |
ClosedCaptionsType | integer | |
ClosedCaptionsLanguage | string |
Ukdpp
Example Ukdpp Object
{
"uid": "dpp_packager",
"kind": "dpp_packager",
"task": {
"retry_method": "fail"
},
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"file_pattern": "tos_mxfdpp_as11_metadata.mxf"
},
"dpp_schema": "op1a",
"as11_options": {
"core": {},
"ukdpp": {
"ProductionNumber": "C5/30571/0001A",
"Synopsis": "Your Synopsis Here",
"Originator": "Originator or Studio Name Here",
"CopyrightYear": 2010,
"PictureRatio": "16:9",
"Distributor": "Distributor Name Here",
"ThreeD": false,
"ProductPlacement": false,
"PSEPass": 1,
"PSEManufacturer": "Harding FPA",
"PSEVersion": "3.4.0",
"SecondaryAudioLanguage": "zxx",
"TertiaryAudioLanguage": "zxx",
"AudioLoudnessStandard": 0,
"LineUpStart": "09:59:30:00",
"IdentClockStart": "09:59:50:00",
"AudioDescriptionPresent": false,
"OpenCaptionsPresent": false,
"SigningPresent": 1,
"CompletionDate": "2020-10-10",
"ContactEmail": "contact@emailaddress.com",
"ContactTelephoneNumber": "02035403630",
"TotalNumberOfParts": 3,
"TotalProgrammeDuration": "00:04:30:00"
},
"segmentation": []
}
}
}
dpp_packager.as11_options.ukdpp
Name | Type | Description |
---|---|---|
ProductionNumber | string | |
Synopsis | string | |
Originator | string | |
CopyrightYear | integer | |
OtherIdentifier | string | |
OtherIdentifierType | string | |
Genre | string | |
Distributor | string | |
PictureRatio | string | |
ThreeD | boolean | |
ThreeDType | integer | |
ProductPlacement | boolean | |
PSEPass | integer | |
PSEManufacturer | string | |
PSEVersion | string | |
VideoComments | string | |
SecondaryAudioLanguage | string | |
TertiaryAudioLanguage | string | |
AudioLoudnessStandard | integer | |
AudioComments | string | |
LineUpStart | string | |
IdentClockStart | string | |
TotalNumberOfParts | integer | |
TotalProgrammeDuration | string | |
AudioDescriptionPresent | boolean | |
AudioDescriptionType | integer | |
OpenCaptionsPresent | boolean | |
OpenCaptionsType | integer | |
OpenCaptionsLanguage | string | |
SigningPresent | integer | |
SignLanguage | integer | |
CompletionDate | string | |
TextlessElementsExist | boolean | |
ProgrammeHasText | boolean | |
ProgrammeTextLanguage | string | |
ContactEmail | string | |
ContactTelephoneNumber | string |
Segmentation
Example Segmentation Object
{
"uid": "dpp_packager",
"kind": "dpp_packager",
"task": {
"retry_method": "fail"
},
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"file_pattern": "tos_mxfdpp_as11_metadata.mxf"
},
"dpp_schema": "op1a",
"as11_options": {
"core": {},
"ukdpp": {},
"segmentation": [
{
"start": "10:00:00:00",
"duration": "00:01:00:00"
},
{
"start": "10:01:00:00",
"duration": "00:02:00:00"
},
{
"start": "10:03:00:00",
"duration": "00:01:30:00"
}
]
}
}
}
dpp_packager.as11_options.segmentation
Name | Type | Description |
---|---|---|
start | string | |
duration | string |
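Segmentation start and duration values are SMPTE-style HH:MM:SS:FF timecodes, and the segment durations should sum to the TotalProgrammeDuration given in the ukdpp metadata. The arithmetic (assuming 25 fps, the usual UK broadcast rate) looks like this:

```python
def tc_to_frames(tc, fps=25):
    """Convert an HH:MM:SS:FF timecode to a frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps=25):
    """Convert a frame count back to HH:MM:SS:FF."""
    f = frames % fps
    total_s = frames // fps
    return f"{total_s // 3600:02d}:{total_s % 3600 // 60:02d}:{total_s % 60:02d}:{f:02d}"

# The three segments from the example above
segments = [
    {"start": "10:00:00:00", "duration": "00:01:00:00"},
    {"start": "10:01:00:00", "duration": "00:02:00:00"},
    {"start": "10:03:00:00", "duration": "00:01:30:00"},
]

total = frames_to_tc(sum(tc_to_frames(s["duration"]) for s in segments))
```

The three segments sum to 00:04:30:00, matching the TotalProgrammeDuration value in the ukdpp example.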
Dolby Audio Encoder Task
Dolby_audio_encoder
Example Dolby_audio_encoder Object
{
"uid": "dolby_audio_encoder",
"kind": "dolby_audio_encoder",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"music_mode": true,
"targets": [
{
"file_pattern": "{{file_pattern}}.eac3",
"existing_files": "replace",
"container": {
"kind": "elementary"
},
"audio": [
{
"codec": "ddp_joc",
"bitrate_kb": 448
}
]
}
]
}
}
dolby_audio_encoder
Name | Type | Description |
---|---|---|
location | object | A location that overrides any location defined within the parents of this encode target. |
music_mode | boolean | Dolby Atmos Music encoding. Valid for codecs: ddp_joc and ac4_ims. Set to true for encoding music content. |
targets | array | An array of target outputs. Each target specifies a location, container, video, and audio properties. |
Targets
Example Targets Object
{
"uid": "dolby_audio_encoder",
"kind": "dolby_audio_encoder",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"music_mode": true,
"targets": [
{
"file_pattern": "{{file_pattern}}.eac3",
"existing_files": "replace",
"container": {
"kind": "elementary"
},
"audio": [
{
"codec": "ddp_joc",
"bitrate_kb": 448
}
]
}
]
}
}
dolby_audio_encoder.targets
Name | Type | Description |
---|---|---|
uid | string | A UID (arbitrary string) to allow referencing this target. This UID may be used, for example, to specify a target for preview generation. User supplied, must be unique within an array of targets. |
location | object | A location that overrides any location defined within the parents of this encode target. |
file_pattern | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
existing_files | enum delete_and_replace replace replace_late rename_new rename_org fail |
The desired behavior when a target file already exists. "replace": will delete the original file and write the new one. "rename_new": gives the new file a different auto-generated name. "rename_org": renames the original file. Note that renaming the original file may not be possible depending on the target location. "delete_and_replace": attempts to immediately delete the original file. This will allow for fast failure in the case of inadequate permissions. "replace_late": does not attempt to delete the original -- simply executes a write. default: fail |
trim | anyOf by_sec_in_out by_sec_in_dur by_timecode by_asset_timecode by_frame_nr by_section_nr by_media_track by_nothing |
Object defining the type of trim operation to perform on an asset. |
prepend_silence_sec | number | Duration of silence prepended to the output. Provided as seconds. |
append_silence_sec | number | Duration of silence appended to the output. Provided as seconds. |
no_ffoa | boolean | Don't include FFOA in the target output. Currently, valid only for Dolby Atmos master conversion. |
container | object | |
audio | array |
Container
Example Container Object
{
"uid": "dolby_audio_encoder",
"kind": "dolby_audio_encoder",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"music_mode": true,
"targets": [
{
"file_pattern": "{{file_pattern}}.eac3",
"existing_files": "replace",
"container": {
"kind": "elementary"
},
"audio": [
{
"codec": "ddp_joc",
"bitrate_kb": 448
}
]
}
]
}
}
dolby_audio_encoder.targets.container
Name | Type | Description |
---|---|---|
kind | enum elementary mp4 mpegts mpeg2ts adm damf mxf_iab fmp4 loas |
The container (i.e. multiplexing) format. |
brand | string | Setting the ftyp of a mp4/mov/3g file. Example: '3gp5'. |
compatible_brands | array | Appending to compatible ftyp(s) of a mp4/mov/3g file. Example: '["3gp5"]'. |
forced_compatible_brands | array | Replacing the compatible ftyp(s) of a mp4/mov/3g file. Example: '["3gp5"]'. |
Audio
Example Audio Object
{
"uid": "dolby_audio_encoder",
"kind": "dolby_audio_encoder",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"targets": [
{
"file_pattern": "{{file_pattern}}.eac3",
"existing_files": "replace",
"container": {
"kind": "elementary"
},
"audio": [
{
"codec": "ddp_joc",
"bitrate_kb": 448
}
]
}
]
}
}
dolby_audio_encoder.targets.audio
Name | Type | Description |
---|---|---|
codec | enum eac3 ddp_joc ddp_joc_cbi ddp_5.1_atmos_downmix ddp ac4_ims |
The audio codec to use. |
channels | integer |
The number of audio channels. minimum: 1 maximum: 16 |
bitrate_kb | number |
The audio bitrate in kilobits per second. minimum: 1 maximum: 1024 |
layer_id | string | This indicates which layer this track belongs to. For example, this allows bundling one video layer and multiple audio layers with the same bitrates but different languages. |
layer_affinities | array | This indicates which other layers this layer can be combined with. For example, to combine audio and video layers. |
frame_rate | enum auto 23.976 24 25 29.97 30 |
The frame rate to be associated with the audio output. |
sample_rate | integer |
The audio sample rate in Hz. Currently, valid only for Dolby Atmos master conversion. |
downmix_config | enum off mono stereo 5.1 |
The channel configuration to target when downmixing. |
language | string | The language tag to include in the AC-4 stream, for example "eng" or "eng-US". Default is empty. The ISO 639-2 and ISO 639-3 special language codes MIS, MUL, UND, and ZXX, and the language code QAA are not allowed. If the tag is a 3 letter code according to ISO 639-2 that is not compliant to BCP 47, the tag will be converted to its 2 letter equivalent according to ISO 639-1. |
warp_mode | enum normal warping pl2x loro |
Edit warpMode metadata in an ADM BWF file. When this option is used, all other target conversion options are ignored. Format change is not possible. |
iframe_interval | number |
I-frame interval expressed in seconds. minimum: 0.5 maximum: 10 |
ipf_interval | number |
Immediate Playout Frame interval expressed in seconds. minimum: 1 maximum: 60 |
filters | array | |
source | array | Mapping audio tracks. |
Dolby Audio Filters
Example Dolby Audio Filters Object
{
"uid": "dolby_audio_encoder",
"kind": "dolby_audio_encoder",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"targets": [
{
"file_pattern": "{{file_pattern}}.eac3",
"existing_files": "replace",
"container": {
"kind": "elementary"
},
"audio": [
{
"codec": "ddp_joc",
"bitrate_kb": 448,
"filters": [
{
"audio_filter": {
"kind": "drc",
"payload": {
"line_mode_drc_profile": "film_light",
"rf_mode_drc_profile": "film_light"
}
}
},
{
"audio_filter": {
"kind": "downmix",
"payload": {
"loro_center_mix_level": -3,
"loro_surround_mix_level": -3,
"ltrt_center_mix_level": -3,
"ltrt_surround_mix_level": -3,
"preferred_downmix_mode": "loro"
}
}
}
]
}
]
}
]
}
}
dolby_audio_encoder.targets.audio.filters
Name | Type | Description |
---|---|---|
audio_filter | object |
Audio_filter
dolby_audio_encoder.targets.audio.filters.audio_filter
Example Audio_filter Object
{
"uid": "dolby_audio_encoder",
"kind": "dolby_audio_encoder",
"payload": {
"location": {
"storage_provider": "s3",
"path": "{{destination_path}}"
},
"targets": [
{
"file_pattern": "{{file_pattern}}.eac3",
"existing_files": "replace",
"container": {
"kind": "elementary"
},
"audio": [
{
"codec": "ddp_joc",
"bitrate_kb": 448,
"filters": [
{
"audio_filter": {
"kind": "drc",
"payload": {
"line_mode_drc_profile": "film_light",
"rf_mode_drc_profile": "film_light"
}
}
},
{
"audio_filter": {
"kind": "downmix",
"payload": {
"loro_center_mix_level": "-3",
"loro_surround_mix_level": "-3",
"ltrt_center_mix_level": "-3",
"ltrt_surround_mix_level": "-3",
"preferred_downmix_mode": "loro"
}
}
}
]
}
]
}
]
}
}
Name | Type | Description |
---|---|---|
kind | enum xml loudness_correction loudness_measurement drc downmix custom_trims |
default: xml |
payload | anyOf |
BIF Creator Task
Bif_creator
Example Bif_creator Object
{
"name": "Hybrik BIF Creator Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_converted.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"width": 1280,
"height": 720,
"codec": "h264",
"profile": "high",
"level": "4.0",
"frame_rate": 23.976
},
"audio": [
{
"codec": "aac",
"channels": 2,
"sample_rate": 48000,
"sample_size": 16,
"bitrate_kb": 128
}
]
}
]
}
},
{
"uid": "bif_creator_task",
"kind": "bif_creator",
"payload": {
"source_stream_selection": "highest",
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"base_name": "{source_basename}_BIF",
"include_in_result": true,
"existing_files": "replace"
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
},
{
"from": [
{
"element": "transcode_task"
}
],
"to": {
"success": [
{
"element": "bif_creator_task"
}
]
}
}
]
}
}
Hybrik supports the creation of BIF (Base Index Frames) files, which are used with the Roku family of devices. The BIF file enables specialized trick-play modes such as fast-forward and rewind for media played on Roku. It is generally delivered as a "sidecar" file alongside the main media files.
bif_creator
Name | Type | Description |
---|---|---|
source_stream_selection | enum all highest middle first |
Selects the video stream that will be used for BIF creation. |
timestamp_multiplier | integer | An option passed to biftool when image sequence is used as a source. |
base_name | string |
This describes the target file name. Placeholders such as {source_basename} for source file name are supported. default: {source_basename} |
location | object | The target output location for the BIF stream. |
existing_files | enum delete_and_replace replace replace_late rename_new rename_org fail |
The desired behavior when a target file already exists. "replace": will delete the original file and write the new one. "rename_new": gives the new file a different auto-generated name. "rename_org": renames the original file. Note that renaming the original file may not be possible depending on the target location. "delete_and_replace": attempts to immediately delete the original file. This will allow for fast failure in the case of inadequate permissions. "replace_late": does not attempt to delete the original -- simply executes a write. default: fail |
include_in_result | boolean | Includes the BIF data in the results document. |
Common Elements
Location
Example Location Object
{
"name": "Hybrik Locations",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"access": {
"credentials_key": "my_aws_creds",
"max_cross_region_mb": -1
}
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_other_bucket/my_output_folder",
"access": {
"credentials_key": "my_other_aws_creds"
}
},
"targets": [
]
}
}
],
"connections": [
]
}
}
There are many places in the Hybrik API where you refer to file locations. The basic parameters you need to provide are where the storage is located (such as Amazon S3 or Google Storage) and a path. There are a number of advanced parameters you can provide, such as access credentials, encryption settings, etc.
Name | Type | Description |
---|---|---|
storage_provider | enum ftp sftp s3 gs as box akamains swift swiftstack http relative |
Select the type of location, such as S3, FTP, etc. default: s3 |
url | string |
Describes the location of a single file. If you need to refer to a folder, use the path parameter instead. The full pathname of the file must be used. Example: s3://my_bucket/my_path/my_file.mov |
path | string |
Describes the location of a folder. If you need to refer to a file, use the url parameter instead. The full pathname of the folder must be used. Example: s3://my_bucket/my_path |
access | anyOf ftp sftp http s3 gs as box akamains swift swiftstack ssh |
This contains credentials granting access to the location. |
permissions | anyOf | This contains access permissions to be applied to objects in the location upon creation. |
encryption | anyOf | This contains encryption settings to be applied to objects on retrieval or creation. |
attributes | array | This contains attributes, such as CacheControl headers, to be applied to objects in the location upon creation. |
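The url-versus-path distinction is easy to get wrong: url names a single object, path names a folder. A small helper (illustrative only) that builds a location object with the correct key:

```python
def make_location(address, is_folder=False, storage_provider="s3", credentials_key=None):
    """Build a location object: 'path' for folders, 'url' for single files."""
    loc = {
        "storage_provider": storage_provider,
        ("path" if is_folder else "url"): address,
    }
    if credentials_key is not None:
        loc["access"] = {"credentials_key": credentials_key}
    return loc
```

For example, make_location("s3://my_bucket/my_path/my_file.mov", credentials_key="my_aws_creds") yields a url-keyed location for a source file, while make_location("s3://my_bucket/my_path", is_folder=True) yields a path-keyed location suitable for an output folder.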
FTP
Example Ftp Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "ftp",
"url": "ftp://my_server/my_folder/my_file.mp4",
"access": {
"username": "my_username",
"password": "my_password"
}
}
}
}
Name | Type | Description |
---|---|---|
username | string |
The login user name. |
password | string |
The login password. |
passive_mode | boolean | Sets FTP Passive Mode. |
credentials_key | string |
Use API Key to reference credentials inside the Hybrik Credentials Vault. |
SFTP
Example Sftp Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "sftp",
"url": "sftp://my_server/my_folder/my_file.mp4",
"access": {
"username": "my_username",
"password": "my_password"
}
}
}
}
Name | Type | Description |
---|---|---|
username | string |
The login user name. |
password | string |
The login password. |
key | string |
SSH private key. |
credentials_key | string |
Use API Key to reference credentials inside the Hybrik Credentials Vault. |
HTTP
Example Http Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "http",
"url": "https://my_server/my_folder/my_file.mp4",
"access": {
"username": "my_username",
"password": "my_password"
}
}
}
}
Name | Type | Description |
---|---|---|
username | string |
The HTTP basic authentication user name. |
password | string |
The HTTP basic authentication password. |
S3
Example Amazon AWS S3 Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_folder/my_file.mp4",
"access": {
"credentials_key": "my_other_aws_creds"
}
}
}
}
Name | Type | Description |
---|---|---|
credentials_key | string |
Use the specified credentials from the Credentials Vault |
max_cross_region_mb | number |
This sets the maximum amount of data (in MB) that can be transferred across regions. The default is set to 100MB to avoid excessive inter-region transfer costs. Set this to -1 to allow unlimited transfers. |
GS
Example Google Cloud Platform GS Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "gs",
"url": "gs://my_bucket/my_folder/my_file.mp4",
"access": {
"credentials_key": "my_google_creds"
}
}
}
}
Name | Type | Description |
---|---|---|
credentials_key | string |
Use the specified credentials from the Credentials Vault |
max_cross_region_mb | number |
This sets the maximum amount of data (in MB) that can be transferred across regions. The default is set to 100MB to avoid excessive inter-region transfer costs. Set this to -1 to allow unlimited transfers. |
AS
Example Azure Storage Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "as",
"url": "as://my_bucket/my_folder/my_file.mp4",
"access": {
"credentials_key": "my_azure_creds"
}
}
}
}
Name | Type | Description |
---|---|---|
credentials_key | string |
Use the specified credentials from the Credentials Vault |
max_cross_region_mb | number |
This sets the maximum amount of data (in MB) that can be transferred across regions. The default is set to 100MB to avoid excessive inter-region transfer costs. Set this to -1 to allow unlimited transfers. |
Box
Example Box Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "box",
"url": "box://132242089072",
"access": {
"credentials_key": "my_box_creds",
"refresh_token": "28AeDxCqGkyCnlKjltBQAGCjHZvMholuQKrErV8ctSnc555ARY4a33Y4ctdHfXs"
}
}
}
}
Name | Type | Description |
---|---|---|
credentials_key | string |
Use the specified credentials from the Credentials Vault. |
access_token | string |
The Box access token. |
refresh_token | string |
The Box refresh token. |
AkamaiNS
Example AkamaiNS Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "akamains",
"url": "akamains://me.akamai.com",
"access": {
"keyname": "myKeyname",
"key": "mysecretKey",
"cp_code": "123456",
"api_host": "myhost.akamaihd.net",
}
}
}
}
Name | Type | Description |
---|---|---|
keyname | string |
The public key name. |
key | string |
The secret key. |
cp_code | string |
The CP code. |
api_host | string |
The API host |
credentials_key | string |
Use API Key to reference credentials inside the Hybrik Credentials Vault. |
Swift
Example Swift Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "swift",
"url": "https://my_swift_server/my_folder/my_file.mp4",
"access": {
"username": "my_username",
"password": "my_password"
}
}
}
}
Name | Type | Description |
---|---|---|
username | string |
The authentication user name. |
password | string |
The authentication password. |
SwiftStack
Example SwiftStack Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "swiftstack",
"url": "s3://my_swiftstack_server/my_folder/my_file.mp4",
"access": {
"username": "my_username",
"password": "my_password"
}
}
}
}
Name | Type | Description |
---|---|---|
username | string |
The authentication user name. |
password | string |
The authentication password. |
SSH
Example SSH Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "swiftstack",
"url": "s3://my_swiftstack_server/my_folder/my_file.mp4",
"access": {
"username": "my_username",
"password": "my_password"
}
}
}
}
Name | Type | Description |
---|---|---|
key | string |
SSH private/public key. |
Attributes
Example Attributes Object
{
"uid": "hls_single_ts",
"kind": "package",
"payload": {
"uid": "main_manifest",
"kind": "hls",
"location": {
"storage_provider": "s3",
"path": "{{destination_hls_manifests}}",
"attributes": [
{
"name": "ContentType",
"value": "application/x-mpegURL"
}
]
},
"file_pattern": "master_manifest.m3u8",
"segmentation_mode": "single_ts",
"media_location": {
"storage_provider": "s3",
"path": "{{destination_hls_media}}",
"attributes": [
{
"name": "ContentType",
"value": "video/MP2T"
}
]
},
"hls": {
"media_playlist_location": {
"storage_provider": "s3",
"path": "{{destination_hls_manifests}}"
}
}
}
}
Name | Type | Description |
---|---|---|
name | string | Name of the parameter. |
value | string | Value of the parameter. |
Trim
Example Trim Object
{
"name": "Hybrik Trim",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"trim": {
"inpoint_sec": 600,
"outpoint_sec": 720
}
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_other_bucket/my_output_folder",
"access": {
"credentials_key": "my_other_aws_creds"
}
},
"targets": [
]
}
}
],
"connections": [
]
}
}
Instead of using an entire file, you can specify just the portions of the file that you would like to use. Trim locations can be defined in a number of different ways, including by in/out points in seconds, by inpoint plus duration, by timecode, etc. The example on the right trims 2 minutes of content out of the source file, starting 10 minutes into the file.
By_sec_in_out
Example By_sec_in_out Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"trim": {
"inpoint_sec": 600,
"outpoint_sec": 720
}
}
}
}
Name | Type | Description |
---|---|---|
inpoint_sec | number | Start the trimmed section at this point in the source. |
outpoint_sec | number | End the trimmed section at this point in the source. |
precision | enum sample frame |
Whether to trim at audio-sample boundaries or at video-frame boundaries |
sequence_timing | enum relative absolute |
Whether the specified in- and out-values are relative to the start of the source at 0 seconds, or absolute based on timecode of the source. |
By_sec_in_dur
Example By_sec_in_dur Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"trim": {
"inpoint_sec": 600,
"duration": 120
}
}
}
}
Name | Type | Description |
---|---|---|
inpoint_sec | number | Start the trimmed section at this point in the source. |
duration_sec | number | Set the duration in seconds of the trimmed section. |
By_timecode
Example By_timecode Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"trim": {
"inpoint_tc": "00:14:00:23",
"outpoint_tc": "00:18:34:12"
}
}
}
}
Name | Type | Description |
---|---|---|
inpoint_tc | string | Start the trimmed section at this point in the source. default: 00:00:00:00 |
outpoint_tc | string | End the trimmed section at this point in the source. |
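Timecode-based trim points relate directly to second-based ones. Below is a minimal sketch (not part of the Hybrik sample downloads) showing how a non-drop-frame timecode such as "00:14:00:23" maps to seconds, e.g. to compare an inpoint_tc with an equivalent inpoint_sec. Drop-frame timecode (see timecode_format below) requires an additional frame-skipping correction and is not handled here.

```javascript
// Convert a non-drop-frame "HH:MM:SS:FF" timecode to seconds at a given frame rate.
function timecodeToSeconds(tc, frameRate) {
    var parts = tc.split(':').map(Number); // [hours, minutes, seconds, frames]
    return parts[0] * 3600 + parts[1] * 60 + parts[2] + parts[3] / frameRate;
}
```

For example, at 25 fps the timecode "00:14:00:23" corresponds to 840.92 seconds.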
By_asset_timecode
Example By_asset_timecode Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"trim": {
"source_timecode_selector": "mxf",
"inpoint_asset_tc": "00:14:00:23",
"outpoint_asset_tc": "00:18:34:12"
}
}
}
}
Name | Type | Description |
---|---|---|
inpoint_asset_tc | string | The timecode of the inpoint. default: 00:00:00:00 |
outpoint_asset_tc | string | The timecode of the outpoint. |
source_timecode_selector | enum first highest lowest mxf gop sdti smpte material_package source_package |
The location of the timecode data to be used. default: first |
timecode_format | enum df ndf auto |
Whether to use drop-frame or non-drop-frame timecode. |
By_frame_nr
Example By_frame_nr Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"trim": {
"inpoint_frame": 1239,
"outpoint_frame": 2572
}
}
}
}
Name | Type | Description |
---|---|---|
inpoint_frame | integer | The inpoint frame number. |
outpoint_frame | integer | The outpoint frame number. |
By_section_nr
Example By_section_nr Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"trim": {
"start_section": 4,
"end_section": 12
}
}
}
}
Name | Type | Description |
---|---|---|
start_section | integer | The inpoint section. |
end_section | integer | The outpoint section. |
By_media_track
Example By_media_track Object
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4",
"trim": {
"inpoint_sec": 362,
"reference_media_track": "audio"
}
}
}
}
Name | Type | Description |
---|---|---|
inpoint | enum track_begin reference_track_begin |
How to interpret the start point. This trim feature is intended to limit the duration of the combined source streams in one asset_version. It works only when no inpoint trim is applied to the reference track (normally the video track), and only when source components are accessed from their start. |
inpoint_sec | number | The inpoint in seconds. |
inpoint_frame | integer | The inpoint in frames. |
inpoint_tc | string | The inpoint in timecode default: 00:00:00:00 |
reference_media_track | enum video audio |
Which track to use as reference. |
Include_if_source_has
Example include_if_source_has Array
{
"name": "Hybrik Include_if_source_has Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"targets": [
{
"file_pattern": "{source_basename}_video.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"bitrate_mode": "vbr",
"bitrate_kb": 1000,
"max_bitrate_kb": 1150,
"height": 720
}
},
{
"include_if_source_has": [
"audio[0]"
],
"file_pattern": "{source_basename}_audio.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"audio": [
{
"channels": 2,
"codec": "aac_lc",
"sample_rate": 48000,
"bitrate_kb": 128
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
There are times when you want to conditionally include an output track depending on whether a particular input track exists in the source. For example: separate the outputs into a video file and an audio file, but only create the output audio file if the input has an audio track. The example on the right does this. The "include_if_source_has" property is an array of the required input tracks. All tracks are numbered starting at 0, so audio[0] is the first audio track, video[0] is the first video track, and subtitles[0] is the first subtitle track. To verify the existence of a particular channel within an audio track, use the syntax below (this references the 2nd channel of the 1st audio track):
"include_if_source_has": [
"components::audio_0::audio[1]"
]
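Since the track/channel reference is just a formatted string with zero-based indices, it can be generated programmatically. The helper below is a hypothetical convenience function (not part of the Hybrik samples) that produces references in the syntax shown above.

```javascript
// Build an "include_if_source_has" channel reference for a given audio track
// and channel. Both indices are zero-based, matching the Hybrik convention.
function audioChannelRef(trackIndex, channelIndex) {
    return 'components::audio_' + trackIndex + '::audio[' + channelIndex + ']';
}
// audioChannelRef(0, 1) references the 2nd channel of the 1st audio track.
```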
Include_conditions
Example include_conditions Array
{
"name": "Hybrik Include_conditions Example",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://my_bucket/my_input_folder/my_file.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://my_bucket/my_output_folder"
},
"targets": [
{
"include_conditions": [
"source.video.width >= 1920"
],
"file_pattern": "{source_basename}_1920.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"bitrate_mode": "vbr",
"bitrate_kb": 3000,
"max_bitrate_kb": 3600,
"width": 1920
},
"audio": [
{
"channels": 2,
"codec": "aac_lc",
"sample_rate": 48000,
"bitrate_kb": 96
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
Where "include_if_source_has" sets conditions on whether a particular input track exists, the "include_conditions" can set more sophisticated conditions. You might want to specify, for example, that a particular output track should only be created when the input video is UHD. The "include_conditions" is an array, so that you can set a number of conditions that need to be true in order for the output section to be created.
A condition requiring the source width to be larger than 1920 pixels and the duration to be longer than 10 minutes would be:
"include_conditions": [
"source.video.width > 1920",
"source.video.duration_sec > 600"
]
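Hybrik evaluates these expressions server-side against the analyzed source. To illustrate the semantics only (every expression in the array must be true for the target to be produced), here is a toy client-side evaluator for simple "path comparison number" expressions; it is an illustration, not Hybrik's actual expression engine.

```javascript
// Evaluate an include_conditions-style array against a probed source object.
// Supports only "source.<group>.<field> <op> <number>" expressions.
function targetIncluded(conditions, source) {
    return conditions.every(function (expr) {
        var m = expr.match(/^source\.(\w+)\.(\w+)\s*(>=|<=|>|<|==)\s*(\d+(?:\.\d+)?)$/);
        if (!m) throw new Error('unsupported expression: ' + expr);
        var value = source[m[1]][m[2]];
        var limit = Number(m[4]);
        switch (m[3]) {
            case '>':  return value > limit;
            case '>=': return value >= limit;
            case '<':  return value < limit;
            case '<=': return value <= limit;
            case '==': return value === limit;
        }
    });
}
```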
API Sample JavaScript
Introduction
We've included complete API examples in JavaScript that show you how to perform basic functions like job submission, deletion, and display. We also included a set of example job JSON files that show different types of jobs. The example job JSON files may be submitted via API calls or they can be submitted via the Hybrik User Interface (UI). In the UI, go to the Active Jobs (or Queued, Completed, or Failed) window and select the "More Actions" dropdown menu. Choose the menu item "Submit Job JSON" and then choose the file you would like to submit. This allows you to test and verify your job JSON separately from your API integration.
The specific example API files are:
- hybrik_connector.js - Performs the basic connection to the Hybrik service.
- submit_job.js - Shows how to submit a single job.
- display_all_jobs.js - Shows how to display all of the jobs (queued, running, and completed).
Download the sample files here:
Sample JavaScript files
Submit A Job
submit_job.js
// import api config. Please edit api_config.js first before running this example.
var api_config = require('./api_config.js');
// the Hybrik Connector lib for submitting POST/GET/DELETE/PUT commands to Hybrik
var HybrikAPI = require('./hybrik_connector.js');
// import shared helpers
var shared = require('./shared.js');
var fs = require('fs');
if (process.argv.length < 3) {
console.log("Usage: node submit_job.js <job_json_file>")
return;
}
// construct an API access object using your Hybrik account info
var hybrik_api = new HybrikAPI(
api_config.HYBRIK_URL,
api_config.HYBRIK_COMPLIANCE_DATE,
api_config.HYBRIK_OAPI_KEY,
api_config.HYBRIK_OAPI_SECRET,
api_config.HYBRIK_AUTH_KEY,
api_config.HYBRIK_AUTH_SECRET
);
// read the JSON job file and parse into object
var job_json_file = process.argv[2];
var job = JSON.parse(fs.readFileSync(job_json_file, 'utf8'));
// submit the job
submitJob(job);
// function to submit a job through the Hybrik API
function submitJob(job) {
// connect to the API
return hybrik_api.connect()
.then(function () {
// submit the job by POSTing the '/jobs' command
return hybrik_api.call_api('POST', '/jobs', null, job)
.then(function (response) {
console.log('Job ID: ' + response.id);
return response.id;
})
.catch(function (err) {
// any error - let it be in the request/network etc. or as a result of the Hybrik API operation, goes here.
shared.print_error(err);
});
})
.catch(function (err) {
// any error - let it be in the request/network etc. or as a result of the Hybrik API operation, goes here.
shared.print_error(err);
});
}
This example will submit one job via the API to the Hybrik system. The job JSON file is specified as a command-line argument to node.js.
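After submission you would typically track the job (Step 3 in the session outline) until it finishes. The sketch below reuses the same connector pattern as submit_job.js; note that filtering '/jobs/info' by the 'id' field is an assumption, extrapolated from the 'status' filter used in delete_all_jobs.js.

```javascript
// Returns true for statuses that indicate the job will not progress further.
function isTerminalStatus(status) {
    return status === 'completed' || status === 'failed';
}

// Poll a job until it reaches a terminal state, re-querying every intervalMs.
// hybrik_api is a connected HybrikAPI instance (see hybrik_connector.js).
function pollJob(hybrik_api, jobId, intervalMs) {
    return hybrik_api.call_api('GET', '/jobs/info', {
        fields: ['id', 'status', 'progress'],
        filters: [{ field: 'id', values: [jobId] }]
    }).then(function (response) {
        var job = response.items[0];
        if (isTerminalStatus(job.status)) return job;
        // not done yet -- wait, then poll again
        return new Promise(function (resolve) {
            setTimeout(function () { resolve(pollJob(hybrik_api, jobId, intervalMs)); }, intervalMs);
        });
    });
}
```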
Display All Jobs
display_all_jobs.js
// import api config. Please edit api_config.js first before running this example.
var api_config = require('./api_config.js');
// the Hybrik Connector lib for submitting POST/GET/DELETE/PUT commands to Hybrik
var HybrikAPI = require('./hybrik_connector.js');
// import shared helpers
var shared = require('./shared.js');
// construct an API access object using your Hybrik account info
var hybrik_api = new HybrikAPI(
api_config.HYBRIK_URL,
api_config.HYBRIK_COMPLIANCE_DATE,
api_config.HYBRIK_OAPI_KEY,
api_config.HYBRIK_OAPI_SECRET,
api_config.HYBRIK_AUTH_KEY,
api_config.HYBRIK_AUTH_SECRET
);
// display the jobs
displayJobList();
function displayJobList() {
// connect to the Hybrik API
return hybrik_api.connect()
.then(function () {
// make a GET call with '/jobs/info' to get a list of all jobs
return hybrik_api.call_api('GET', '/jobs/info', { fields: [ 'id', 'name','progress', 'status','start_time','end_time'], sort_field: 'id', order: 'desc' })
.then(function (response) {
// the response is an array of job objects
var numberOfJobs = response.items.length;
console.log('Number of Jobs: ' + numberOfJobs);
for (var i = 0; i< numberOfJobs; i++) {
console.log('ID: ' + response.items[i].id + ' Name: '+ response.items[i].name + ' Progress: ' + response.items[i].progress + ' Status: ' + response.items[i].status + ' Start: ' + response.items[i].start_time + ' End: ' + response.items[i].end_time);
}
return true;
})
.catch(function (err) {
// any error - let it be in the request/network etc. or as a result of the Hybrik API operation, goes here.
shared.print_error(err);
});
})
.catch(function (err) {
// any error - let it be in the request/network etc. or as a result of the Hybrik API operation, goes here.
shared.print_error(err);
});
}
This example will display all the jobs currently in the system. By changing the "fields" array you can change which columns of data are returned.
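The same '/jobs/info' query can be narrowed with the filters array used in delete_all_jobs.js. The helper below is a hypothetical convenience (not part of the sample download) that assembles the query object, optionally restricting results to particular statuses.

```javascript
// Build query arguments for a GET '/jobs/info' call: which fields to return,
// plus an optional status filter (e.g. ['failed'] or ['completed']).
function buildJobListQuery(fields, statuses) {
    var query = { fields: fields, sort_field: 'id', order: 'desc' };
    if (statuses && statuses.length)
        query.filters = [{ field: 'status', values: statuses }];
    return query;
}
```

For example, `buildJobListQuery(['id', 'name', 'status'], ['failed'])` would list only failed jobs, newest first.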
Delete All Jobs
delete_all_jobs.js
// import api config. Please edit api_config.js first before running this example.
var api_config = require('./api_config.js');
// the Hybrik Connector lib for submitting POST/GET/DELETE/PUT commands to Hybrik
var HybrikAPI = require('./hybrik_connector.js');
// import shared helpers
var shared = require('./shared.js');
// construct an API access object using your Hybrik account info
var hybrik_api = new HybrikAPI(
api_config.HYBRIK_URL,
api_config.HYBRIK_COMPLIANCE_DATE,
api_config.HYBRIK_OAPI_KEY,
api_config.HYBRIK_OAPI_SECRET,
api_config.HYBRIK_AUTH_KEY,
api_config.HYBRIK_AUTH_SECRET
);
deleteAllJobs();
// and here is an example that only deletes the completed jobs:
// deleteCompletedJobs();
function delete_job_chunk(filters) {
//get a list of the jobs -- this returns a max of 1000 jobs, so as long as jobs are returned, call the delete recursively
var query_args = {
fields: ['id'],
order: 'asc'
};
if (filters) query_args.filters = filters;
return hybrik_api.call_api('GET', '/jobs/info', query_args).then(function (response) {
var numberOfJobs = response.items.length;
if (numberOfJobs == 0) {
console.log('No more jobs available to delete');
return true;
}
var jobIDs = [];
for (var i = 0; i < numberOfJobs; i++) {
jobIDs[i] = response.items[i].id;
}
console.log('Deleting ' + numberOfJobs +' jobs...');
//delete the jobs
return hybrik_api.call_api('DELETE', '/jobs', null, {ids: jobIDs}).then(function (response) {
numberOfJobs = response.items.length;
console.log('Number of jobs successfully deleted: ' + numberOfJobs);
//call delete_job_chunks() again
return delete_job_chunk(filters);
}).catch(function (err) {
// any error - let it be in the request/network etc. or as a result of the Hybrik API operation, goes here.
shared.print_error(err);
});
}).catch(function (err) {
// any error - let it be in the request/network etc. or as a result of the Hybrik API operation, goes here.
shared.print_error(err);
});
}
function deleteAllJobs() {
//connect to the Hybrik API
return hybrik_api.connect().then(function () {
return delete_job_chunk();
})
.catch(function (err) {
// any error - let it be in the request/network etc. or as a result of the Hybrik API operation, goes here.
shared.print_error(err);
});
}
// same function as above, but only deletes the jobs marked as 'completed'
function deleteCompletedJobs() {
return hybrik_api.connect().then(function () {
return delete_job_chunk([
{
field: 'status',
values: ['completed']
}
]);
})
.catch(function (err) {
// any error - let it be in the request/network etc. or as a result of the Hybrik API operation, goes here.
shared.print_error(err);
});
}
This example will delete all the jobs currently in the system. There is also an example function to delete only the completed jobs.
Hybrik Connector
hybrik_connector.js
var requestp = require('request-promise');
var Promise = require('bluebird');
function HybrikAPI(api_url, compliance_date, oapi_key, oapi_secret, user_key, user_secret) {
if (!api_url || api_url.indexOf('http') != 0 || api_url.indexOf(':') < 0 || api_url.indexOf('//') < 0)
throw new Error('HybrikAPI requires a valid API url');
if (!user_key)
throw new Error('HybrikAPI requires a user_key');
if (!user_secret)
throw new Error('HybrikAPI requires a user_secret');
if (!compliance_date || !/^\d{8}$/.test(compliance_date))
throw new Error('HybrikAPI requires a compliance date in "YYYYMMDD" format.');
this.user_key = user_key;
this.user_secret = user_secret;
this.compliance_date = compliance_date;
var url_parts = api_url.split('//');
if (url_parts.length != 2)
throw new Error('HybrikAPI requires a valid API url');
this.oapi_url = url_parts[0] + '//' + oapi_key + ':' + oapi_secret + '@' + url_parts[1];
if (this.oapi_url[this.oapi_url.length - 1] == '/')
this.oapi_url = this.oapi_url.substring(0, this.oapi_url.length - 1);
}
HybrikAPI.prototype.connect = function () {
var self = this;
return requestp({
uri : self.oapi_url + '/login',
method : 'POST',
qs: {
auth_key: self.user_key,
auth_secret: self.user_secret
},
headers: {
'X-Hybrik-Compliance': self.compliance_date
}
})
.then(function (response) {
self.login_data = JSON.parse(response);
return Promise.resolve(true);
})
}
HybrikAPI.prototype.call_api = function (http_method, api_method, url_params, body_params) {
var self = this;
var request_options = {
uri : self.oapi_url + (api_method[0] === '/' ? api_method : '/' + api_method),
method : http_method,
headers: {
'X-Hybrik-Sapiauth': self.login_data.token,
'X-Hybrik-Compliance': self.compliance_date
}
};
if (url_params) {
request_options.qs = url_params;
}
if (body_params) {
request_options.headers['Content-Type'] = 'application/json';
request_options.body = JSON.stringify(body_params);
}
return requestp(request_options)
.then(function (response) {
var resp_obj = JSON.parse(response)
return Promise.resolve(resp_obj);
})
}
module.exports = HybrikAPI;
The hybrik_connector.js example uses promises for its asynchronous functionality. The specific npm libraries used are "request-promise" and "bluebird". The HybrikAPI call expects the following parameters (all required):
Name | Type | Description |
---|---|---|
api_url | string | The URL for the Hybrik API entry. Depending on your Hybrik account, you may have a unique entry point for your company. |
compliance_date | string | The date you start using the API. This ensures that even if we change something, we will stay compatible with you. |
oapi_key | string | Your account OAPI Key. This authenticates you to interface with the API. |
oapi_secret | string | Your account OAPI Secret. |
user_key | string | Your API User account user name. |
user_secret | string | Your API User account password. |
API Sample JSON
Introduction
Hybrik uses JSON to describe Jobs. These examples show different types of jobs, including both simple and complex workflows.
Download all of the sample JSON files here:
Sample JSON files
Basic Transcode
example_1_simple.json
{
"name": "Hybrik API Example#1 - simple transcode",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/example1"
},
"targets": [
{
"file_pattern": "%s.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 640,
"height": 360,
"frame_rate": 23.976,
"bitrate_kb": 600
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
This example JSON shows how to transcode a single source file and put the result in a specified target location. The transcode parameters are set in the "targets" array and specify that the resulting video will have the following characteristics:
- container: mp4
- video codec: h.264
- width: 640 pixels
- height: 360 pixels
- framerate: 23.976 frames/sec
- video bitrate: 600kbps
- audio codec: HEAAC V2
- sample rate: 44.1kHz
- audio bitrate: 128kbps
Transcode - FFMPEG Arguments
example_2_simple.json
{
"name": "Hybrik API Example#2 - simple transcode, ffmpeg",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/copy/example2"
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"existing_files": "replace",
"ffmpeg_args": "-pix_fmt yuv420p -c:v libx264 -crf 22 -preset medium -s 640x360 -b:v 800k -acodec libfdk_aac -ar 44100 -ac 2 -b:a 128k -f mp4"
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
This example JSON shows how to transcode a single source file and put the result in a specified target location, but in this case it uses ffmpeg command-line arguments to define the transcode parameters. Some customers already use ffmpeg in their workflows and can therefore use their ffmpeg settings directly in Hybrik.
- container: mp4
- video codec: h.264
- width: 640 pixels
- height: 360 pixels
- constant rate factor: 22
- video bitrate: 800kbps
- audio codec: AAC
- sample rate: 44.1kHz
- audio bitrate: 128kbps
Transcode->Copy->Notify
example_3_transcode_copy_notify.json
{
"name": "Hybrik API Example#3 - transcode, copy and notify",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/transcode/example3"
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 640,
"height": 360,
"frame_rate": 23.976,
"bitrate_kb": 600
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
}
]
}
},
{
"uid": "copy_task",
"kind": "copy",
"payload": {
"target": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/copy/example3"
},
"existing_files": "replace",
"file_pattern": "{source_name}"
}
}
},
{
"uid": "success_notify",
"kind": "notify",
"task": {
"retry_method": "fail"
},
"payload": {
"notify_method": "email",
"email": {
"recipients": "{account_owner_email}",
"subject": "Example 3 succeeded",
"body": "congratulation, you completed example 3"
}
}
},
{
"uid": "error_notify",
"kind": "notify",
"payload": {
"notify_method": "email",
"email": {
"recipients": "{account_owner_email}",
"subject": "Example 3 failed",
"body": "too bad, example 3 did not complete"
}
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
},
{
"from": [
{
"element": "transcode_task"
}
],
"to": {
"success": [
{
"element": "copy_task"
}
],
"error": [
{
"element": "error_notify"
}
]
}
},
{
"from": [
{
"element": "copy_task"
}
],
"to": {
"success": [
{
"element": "success_notify"
}
],
"error": [
{
"element": "error_notify"
}
]
}
}
]
}
}
This example JSON shows how to build a more complex workflow. The job specifies that the source file will be transcoded to one location, copied to a second location, and that an email notification will be sent upon completion of the copy. Both success and failure notifications are specified.
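Each entry in the "connections" array has the same shape: a "from" element list and a "to" object with "success" and optional "error" routes. When generating job JSON programmatically, a small helper can assemble these entries; the function below is a hypothetical convenience, not part of the Hybrik samples.

```javascript
// Build one entry of a job's "connections" array: route fromUid's success
// output to successUid, and (optionally) its error output to errorUid.
function connection(fromUid, successUid, errorUid) {
    var to = { success: [{ element: successUid }] };
    if (errorUid) to.error = [{ element: errorUid }];
    return { from: [{ element: fromUid }], to: to };
}
```

For example, `connection('copy_task', 'success_notify', 'error_notify')` reproduces the last connection object in the example on the right.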
Analyze->QC->Transcode
example_4_analyze_qc_transcode.json
{
"name": "Hybrik API Example#4 - analyze, qc, transcode",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-public/sample_sources/sample1.mp4"
}
}
},
{
"uid": "analyze_task",
"kind": "analyze",
"payload": {
"general_properties": {
"enabled": true
},
"deep_properties": {
"video": {
"black": {
"enabled": true,
"is_optional": true,
"black_level": 0.03,
"duration_sec": 1
}
},
"audio": {
"levels": {
"enabled": true,
"is_optional": true
},
"silence": {
"enabled": true,
"is_optional": true,
"noise_db": -60,
"duration_sec": 1
}
}
}
}
},
{
"uid": "qc_task",
"kind": "qc",
"payload": {
"conditions": {
"pass": [
{
"condition": "general_properties.container.size_kb > 10000",
"message_pass": "PASS: Source file size is greater than 10 megabytes, size = {general_properties.container.size_kb} kb.",
"message_fail": "FAIL: Source file size is less than or equal to 10 megabytes, size = {general_properties.container.size_kb} kb."
},
{
"condition": "general_properties.video.width >= 1920",
"message_pass": "PASS: Video frame width is greater than or equal to 1920, width is {general_properties.video.width} pixels.",
"message_fail": "FAIL: Video frame width is less than 1920, width is {general_properties.video.width} pixels."
},
{
"condition": "general_properties.video.height >= 1080",
"message_pass": "PASS: Video frame height is greater than or equal to 1080, height is {general_properties.video.height} pixels.",
"message_fail": "FAIL: Video frame height is less than 1080, height is {general_properties.video.height} pixels."
},
{
"condition": "general_properties.audio.size() >= 1",
"message_pass": "PASS: Number of audio tracks is greater than or equal to 1, there are {general_properties.audio.size()} tracks",
"message_fail": "FAIL: There are no audio tracks."
},
{
"condition": "general_properties.audio[0].sample_rate >= 44100",
"message_pass": "PASS: Audio sample rate is greater than or equal to 44.1 kHz, sample rate is {general_properties.audio[0].sample_rate} Hz.",
"message_fail": "FAIL: Audio sample rate is less than 44.1 kHz, sample rate is {general_properties.audio[0].sample_rate} Hz."
},
{
"condition": "deep_properties.video.black.size() == 0 || deep_properties.video.black.max('duration') <= 5",
"message_pass": "PASS: Longest black segment is shorter than 5 seconds.",
"message_fail": "FAIL: At least one black segment is longer than or equal to 5 seconds."
},
{
"condition": "deep_properties.audio[0].levels.rms_level_db >= -60",
"message_pass": "PASS: Audio RMS level is higher than or equal to -60 dB, level is {deep_properties.audio[0].levels.rms_level_db} dB.",
"message_fail": "FAIL: Audio level is lower than -60dB, level is {deep_properties.audio[0].levels.rms_level_db} dB."
},
{
"condition": "deep_properties.audio[0].silence.size() == 0 || deep_properties.audio[0].silence.max('duration') <= 5",
"message_pass": "PASS: Longest audio silence is shorter than 5 seconds.",
"message_fail": "FAIL: At least one audio silence is longer than or equal to 5 seconds."
}
]
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/transcode/example4"
},
"targets": [
{
"file_pattern": "{source_basename}.mp4",
"existing_files": "replace",
"container": {
"kind": "mp4"
},
"video": {
"codec": "h264",
"width": 640,
"height": 360,
"frame_rate": 23.976,
"bitrate_kb": 600
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "analyze_task"
}
]
}
},
{
"from": [
{
"element": "analyze_task"
}
],
"to": {
"success": [
{
"element": "qc_task"
}
]
}
},
{
"from": [
{
"element": "qc_task"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
This example JSON shows how to build a more complex workflow incorporating both Analyze and QC tasks. The job specifies that the source file will first be analyzed for both audio and video properties. Then the QC task will verify the following parameters:
- overall file size > 10 MB
- video width >= 1920 pixels
- video height >= 1080 pixels
- at least one audio track
- an audio sample rate >= 44.1 kHz
- no periods of black video longer than 5 seconds
- an audio RMS level >= -60 dB
- no periods of silence longer than 5 seconds
HLS Transcode
example_5_hls.json
{
"name": "Hybrik Example #5 - HLS encode",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/transcode/example5"
},
"manifests": [
{
"file_pattern": "{source_basename}.m3u8",
"kind": "hls",
"uid": "main_hls"
}
],
"options": {
"pipeline": {
"use_agressive_mt": true
}
},
"targets": [
{
"manifest_uids": [
"main_hls"
],
"file_pattern": "{source_basename}_Layer1.m3u8",
"existing_files": "replace",
"container": {
"kind": "hls",
"segment_duration": 10,
"segment_file_pattern": "{target_dir}/{target_basename}_%05d.ts"
},
"video": {
"codec": "h264",
"width": 256,
"height": 144,
"frame_rate": 23.976,
"bitrate_kb": 180
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 64
}
]
},
{
"manifest_uids": [
"main_hls"
],
"file_pattern": "{source_basename}_Layer2.m3u8",
"existing_files": "replace",
"container": {
"kind": "hls",
"segment_duration": 10,
"segment_file_pattern": "{target_dir}/{target_basename}_%05d.ts"
},
"video": {
"codec": "h264",
"width": 512,
"height": 288,
"frame_rate": 23.976,
"bitrate_kb": 640
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
},
{
"manifest_uids": [
"main_hls"
],
"file_pattern": "{source_basename}_Layer3.m3u8",
"existing_files": "replace",
"container": {
"kind": "hls",
"segment_duration": 10,
"segment_file_pattern": "{target_dir}/{target_basename}_%05d.ts"
},
"video": {
"codec": "h264",
"width": 768,
"height": 432,
"frame_rate": 23.976,
"bitrate_kb": 1400
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 192
}
]
},
{
"manifest_uids": [
"main_hls"
],
"file_pattern": "{source_basename}_Layer4.m3u8",
"existing_files": "replace",
"container": {
"kind": "hls",
"segment_duration": 10,
"segment_file_pattern": "{target_dir}/{target_basename}_%05d.ts"
},
"video": {
"codec": "h264",
"width": 1024,
"height": 576,
"frame_rate": 23.976,
"bitrate_kb": 1900
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 192
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
This example creates a four-layer HLS package. The parameters of the HLS output are:
- 4 layers of H.264 video (256×144 through 1024×576)
- 10-second segment length
- HE-AAC v2 audio
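As a sanity check on the ladder above, the combined bandwidth of each HLS layer (video plus audio, in kilobits per second) and the approximate size of a 10-second segment can be computed as follows. This is a back-of-the-envelope sketch that ignores TS container overhead:

```python
# Bitrate ladder from the HLS example above: (video_kb, audio_kb) per layer.
layers = [(180, 64), (640, 128), (1400, 192), (1900, 192)]
segment_duration = 10  # seconds, as set in the job

for i, (video_kb, audio_kb) in enumerate(layers, start=1):
    total_kb = video_kb + audio_kb          # combined kbit/s
    # kbit/s * seconds / 8 -> kilobytes per segment (ignores TS overhead)
    segment_kB = total_kb * segment_duration / 8
    print(f"Layer {i}: {total_kb} kbit/s, ~{segment_kB:.0f} kB per segment")
```

Per-layer totals like these are what an HLS client's adaptive-bitrate logic switches between, so keeping sensible spacing between layers (roughly 1.5–2× steps) is a common ladder design choice.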
MPEG-DASH Transcode
example_6_dash.json
{
"name": "Hybrik Example #6 - DASH encode",
"payload": {
"elements": [
{
"uid": "source_file",
"kind": "source",
"payload": {
"kind": "asset_url",
"payload": {
"storage_provider": "s3",
"url": "s3://hybrik-examples/public/sources/sample1.mp4"
}
}
},
{
"uid": "transcode_task",
"kind": "transcode",
"payload": {
"location": {
"storage_provider": "s3",
"path": "s3://hybrik-examples/public/output/transcode/example6"
},
"manifests": [
{
"file_pattern": "{source_basename}.mpd",
"kind": "dash",
"uid": "main_dash"
}
],
"options": {
"pipeline": {
"use_agressive_mt": true
}
},
"targets": [
{
"manifest_uids": [
"main_dash"
],
"file_pattern": "{source_basename}_Layer1.mpd",
"existing_files": "replace",
"container": {
"kind": "dash-mp4",
"segment_duration": 10
},
"video": {
"codec": "h264",
"width": 256,
"height": 144,
"frame_rate": 23.976,
"bitrate_kb": 180
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 64
}
]
},
{
"manifest_uids": [
"main_dash"
],
"file_pattern": "{source_basename}_Layer2.mpd",
"existing_files": "replace",
"container": {
"kind": "dash-mp4",
"segment_duration": 10
},
"video": {
"codec": "h264",
"width": 512,
"height": 288,
"frame_rate": 23.976,
"bitrate_kb": 640
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 128
}
]
},
{
"manifest_uids": [
"main_dash"
],
"file_pattern": "{source_basename}_Layer3.mpd",
"existing_files": "replace",
"container": {
"kind": "dash-mp4",
"segment_duration": 10
},
"video": {
"codec": "h264",
"width": 768,
"height": 432,
"frame_rate": 23.976,
"bitrate_kb": 1400
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 192
}
]
},
{
"manifest_uids": [
"main_dash"
],
"file_pattern": "{source_basename}_Layer4.mpd",
"existing_files": "replace",
"container": {
"kind": "dash-mp4",
"segment_duration": 10
},
"video": {
"codec": "h264",
"width": 1024,
"height": 576,
"frame_rate": 23.976,
"bitrate_kb": 1900
},
"audio": [
{
"codec": "heaac_v2",
"channels": 2,
"sample_rate": 44100,
"bitrate_kb": 192
}
]
}
]
}
}
],
"connections": [
{
"from": [
{
"element": "source_file"
}
],
"to": {
"success": [
{
"element": "transcode_task"
}
]
}
}
]
}
}
This example creates a four-layer MPEG-DASH package. The parameters of the MPEG-DASH output are:
- 4 layers of H.264 video (256×144 through 1024×576)
- 10-second segment length
- HE-AAC v2 audio
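Since the four DASH targets differ only in resolution and bitrates, the repetitive `targets` array can be generated from a compact ladder description before submitting the job. The helper below is a convenience sketch, not part of the Hybrik API; the ladder values are taken from the job above:

```python
import json

# Ladder from the DASH example: (width, height, video_kb, audio_kb).
LADDER = [(256, 144, 180, 64), (512, 288, 640, 128),
          (768, 432, 1400, 192), (1024, 576, 1900, 192)]

def make_dash_targets(ladder, manifest_uid="main_dash"):
    """Build the `targets` array for a Hybrik DASH transcode payload."""
    targets = []
    for i, (width, height, video_kb, audio_kb) in enumerate(ladder, start=1):
        targets.append({
            "manifest_uids": [manifest_uid],
            # %-formatting avoids clashing with Hybrik's {placeholder} syntax
            "file_pattern": "{source_basename}_Layer%d.mpd" % i,
            "existing_files": "replace",
            "container": {"kind": "dash-mp4", "segment_duration": 10},
            "video": {"codec": "h264", "width": width, "height": height,
                      "frame_rate": 23.976, "bitrate_kb": video_kb},
            "audio": [{"codec": "heaac_v2", "channels": 2,
                       "sample_rate": 44100, "bitrate_kb": audio_kb}],
        })
    return targets

targets = make_dash_targets(LADDER)
print(json.dumps(targets[0], indent=2))
```

The resulting list can be dropped directly into the transcode task's `payload.targets`; changing the ladder then requires editing only one line instead of four near-identical JSON blocks.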
Legal Notice
Hybrik API documentation
Revision: 1.1
Creation: 2019-04-01
This API document explains the basic use of the Hybrik API for Job creation and management.
(c) 2021, Dolby Laboratories, all rights reserved
THIS API SPECIFICATION IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE API OR THE USE OR OTHER DEALINGS IN THE API.