Publishing Point with Mixing Source
This use-case demonstrates how to create a mixing publishing point that composites multiple input sources into a single output stream with transcoding and adaptive bitrate. The MixingSource decodes incoming streams, applies video and audio mixing, and re-encodes the output into multiple quality tracks.
Overview
The createMixingPublishingPoint1() method performs the following:
- Creates a MixingSource connected to the streaming server
- Configures encoder/decoder parameters for the target platform
- Defines the mixing configuration via ApiPublishingPointRequest — input sources, video/audio mixing mode, and output tracks
- Creates the publishing point with an inactivity timeout
Creating the Source
```csharp
VAST.Network.MixingSource source = new VAST.Network.MixingSource(this.server);
```
MixingSource takes the streaming server as a constructor parameter. It uses the server to resolve internal source URIs (such as vast://publishing-point/... references to other publishing points).
Alternatively, a different source class can be used in place of MixingSource if you want to control the mixing pipeline entirely from code, without the built-in JSON API, and don't need the ability to use other publishing points as sources.
Platform-Specific Encoder Configuration
On Linux, the encoder and decoder framework must be configured explicitly:
```csharp
// Requires: using System.Runtime.InteropServices;
if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
{
    bool useFFmpeg = true;

    source.Parameters.VideoDecoderParameters.PreferredMediaFramework =
        useFFmpeg ? VAST.Common.MediaFramework.FFmpeg : VAST.Common.MediaFramework.CUDA;
    source.Parameters.VideoDecoderParameters.AllowHardwareAcceleration = true;

    source.Parameters.VideoEncoderParameters.PreferredMediaFramework =
        useFFmpeg ? VAST.Common.MediaFramework.FFmpeg : VAST.Common.MediaFramework.CUDA;
    source.Parameters.VideoEncoderParameters.AllowHardwareAcceleration = true;

    source.Parameters.AudioDecoderParameters.PreferredMediaFramework =
        VAST.Common.MediaFramework.FFmpeg;
    source.Parameters.AudioDecoderParameters.AllowHardwareAcceleration = false;

    source.Parameters.AudioEncoderParameters.PreferredMediaFramework =
        VAST.Common.MediaFramework.FFmpeg;
    source.Parameters.AudioEncoderParameters.AllowHardwareAcceleration = false;
}
```
The Parameters property exposes separate encoder and decoder settings for video and audio. On Windows, the built-in Media Foundation framework is used by default. On Linux, set the framework to FFmpeg or CUDA (for NVIDIA hardware acceleration).
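The useFFmpeg flag in the snippet above is hardcoded; if the deployment target varies, the choice could be made configurable. A minimal sketch, assuming a hypothetical VAST_USE_CUDA environment variable (not part of the library):

```csharp
// Hypothetical: prefer CUDA only when VAST_USE_CUDA=1 is set in the environment.
bool useFFmpeg = Environment.GetEnvironmentVariable("VAST_USE_CUDA") != "1";
```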
Configuring the Mixing Pipeline
The mixing configuration is passed as an ApiPublishingPointRequest object via the Update() method:
```csharp
source.Update(new VAST.Network.ApiPublishingPointRequest
{
    Path = "mixing",
    StreamingMode = VAST.Common.StreamingMode.Live,
    AllowVideoProcessing = false,
    NoSamplesTimeoutMs = 500,
    Sources = new List<VAST.Network.ApiSource>
    (
        new VAST.Network.ApiSource[]
        {
            new VAST.Network.ApiSource { Uri = "vast://publishing-point/ingest" },
        }
    ),
    Processing = new VAST.Network.ApiProcessing { ... } // detailed in the sections below
});
```
Key Properties
| Property | Description |
|---|---|
| Path | Publishing point path |
| StreamingMode | Live for continuous streaming |
| AllowVideoProcessing | false disables the image processing pipeline (overlays, transformations) |
| NoSamplesTimeoutMs | Timeout in milliseconds before switching to a fallback source or black screen when no media samples are received |
| Sources | List of input sources to mix |
Input Sources
```csharp
Sources = new List<VAST.Network.ApiSource>
(
    new VAST.Network.ApiSource[]
    {
        new VAST.Network.ApiSource { Uri = "vast://publishing-point/ingest" },
    }
)
```
Each ApiSource specifies an input. The vast://publishing-point/ URI scheme references another publishing point on the same server — in this case, an ingest publishing point that accepts incoming publisher connections. Standard URIs (rtsp://, rtmp://, etc.) can also be used for external sources.
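As a sketch, a request mixing the internal ingest point with an external RTSP camera might list both URI styles side by side (the RTSP address below is a placeholder for illustration, not part of the sample):

```csharp
Sources = new List<VAST.Network.ApiSource>
(
    new VAST.Network.ApiSource[]
    {
        // internal publishing point on the same server
        new VAST.Network.ApiSource { Uri = "vast://publishing-point/ingest" },
        // external source; placeholder address
        new VAST.Network.ApiSource { Uri = "rtsp://203.0.113.10/stream" },
    }
)
```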
Video Processing
```csharp
VideoProcessing = new VAST.Network.ApiVideoProcessing
{
    Discard = false,
    Transcoding = true,
    Mixing = new VAST.Network.ApiVideoMixing
    {
        Type = VAST.Image.Mixing.VideoMixingType.Single,
        SourceIndex = 0,
    },
    Tracks = new List<VAST.Network.ApiVideoTrack>
    (
        new VAST.Network.ApiVideoTrack[]
        {
            new VAST.Network.ApiVideoTrack
            {
                Index = 0,
                Width = 1280, Height = 720,
                Framerate = new VAST.Common.Rational(30),
                KeyframeInterval = 30,
                Bitrate = 3000000,
                Codec = VAST.Common.Codec.H264,
                Profile = 0x100 | 66,
            },
            new VAST.Network.ApiVideoTrack
            {
                Index = 1,
                Width = 848, Height = 480,
                Framerate = new VAST.Common.Rational(30),
                KeyframeInterval = 30,
                Bitrate = 1500000,
                Codec = VAST.Common.Codec.H264,
                Profile = 0x100 | 66,
            },
            new VAST.Network.ApiVideoTrack
            {
                Index = 2,
                Width = 416, Height = 240,
                Framerate = new VAST.Common.Rational(30),
                KeyframeInterval = 30,
                Bitrate = 400000,
                Codec = VAST.Common.Codec.H264,
                Profile = 0x100 | 66,
            },
        }
    )
}
```
Video Mixing Mode
The Mixing property controls how input sources are composited:
| VideoMixingType | Description |
|---|---|
| Single | Use video from a single source (specified by SourceIndex); must be used when AllowVideoProcessing is false |
| All | Composite all sources into one frame using layer positions |
| Loop | Cycle through sources, switching when the current one finishes |
In this sample, Single mode with SourceIndex = 0 means the video from the first source is used directly.
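For comparison, a Loop configuration might look like the following sketch; it assumes SourceIndex is not needed, since the active source advances automatically:

```csharp
Mixing = new VAST.Network.ApiVideoMixing
{
    // cycle through sources 0, 1, ..., switching when the current one finishes
    Type = VAST.Image.Mixing.VideoMixingType.Loop,
}
```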
Adaptive Bitrate Tracks
The Tracks list defines multiple output quality levels. Each track is encoded independently with its own resolution and bitrate:
| Track | Resolution | Bitrate |
|---|---|---|
| 0 | 1280x720 | 3 Mbps |
| 1 | 848x480 | 1.5 Mbps |
| 2 | 416x240 | 400 Kbps |
When served via HLS or MPEG-DASH, clients receive a multi-bitrate manifest and can switch between quality levels based on available bandwidth.
Profile Value
The Profile value 0x100 | 66 enforces the H.264 Constrained Baseline profile (profile IDC 66 with the constrained flag 0x100). This ensures maximum compatibility with players and devices.
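The profile_idc values come from the H.264/AVC specification; combining them with the library's 0x100 constrained flag follows the pattern used above. A sketch of other common values, assuming the library accepts them the same way:

```csharp
const int ConstrainedFlag = 0x100; // library convention, as used in the sample
const int BaselineIdc = 66;        // Baseline / Constrained Baseline (H.264 spec)
const int MainIdc = 77;            // Main
const int HighIdc = 100;           // High

int constrainedBaseline = ConstrainedFlag | BaselineIdc; // the value used above
int main = MainIdc;                                      // plain Main profile
```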
Audio Processing
```csharp
AudioProcessing = new VAST.Network.ApiAudioProcessing
{
    Discard = false,
    Transcoding = true,
    Mixing = new VAST.Network.ApiAudioMixing
    {
        Type = VAST.Image.Mixing.AudioMixingType.Single,
        SourceIndex = 0,
    },
    Tracks = new List<VAST.Network.ApiAudioTrack>
    (
        new VAST.Network.ApiAudioTrack[]
        {
            new VAST.Network.ApiAudioTrack
            {
                Index = 3,
                SampleRate = 44100,
                Channels = 2,
                Bitrate = 128000,
                Codec = VAST.Common.Codec.AAC,
            }
        }
    )
}
```
Audio uses the same Single mixing mode — audio from source 0 is taken directly. A single AAC output track is defined at 44.1 kHz stereo, 128 kbps. The track index (3) continues after the video track indices (0, 1, 2).
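If a second audio rendition were needed (for example, a low-bandwidth track), it could hypothetically continue the index sequence; this is a sketch, not part of the sample:

```csharp
new VAST.Network.ApiAudioTrack
{
    Index = 4,          // next free index after video (0-2) and audio (3)
    SampleRate = 44100,
    Channels = 2,
    Bitrate = 64000,    // 64 kbps for constrained connections
    Codec = VAST.Common.Codec.AAC,
}
```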
Audio Mixing Modes
| AudioMixingType | Description |
|---|---|
| Single | Use audio from a single source (specified by SourceIndex); must be used when AllowVideoProcessing is false |
| All | Mix audio from all sources together |
| Loop | Cycle through sources, switching when the current one finishes |
Video Processing with Compositing
When AllowVideoProcessing is set to true, the mixing source enables the full image processing pipeline — sources can be positioned, scaled, layered, and composited into a single output frame. Video mixing mode must be set to All, and layers define how sources are arranged.
The following example composites two sources as a picture-in-picture layout with mixed audio:
```csharp
source.Update(new VAST.Network.ApiPublishingPointRequest
{
    Path = "mixing",
    StreamingMode = VAST.Common.StreamingMode.Live,
    AllowVideoProcessing = true,
    NoSamplesTimeoutMs = 500,
    Sources = new List<VAST.Network.ApiSource>
    (
        new VAST.Network.ApiSource[]
        {
            new VAST.Network.ApiSource { Uri = "vast://publishing-point/camera1" },
            new VAST.Network.ApiSource { Uri = "vast://publishing-point/camera2" },
        }
    ),
    Processing = new VAST.Network.ApiProcessing
    {
        VideoProcessing = new VAST.Network.ApiVideoProcessing
        {
            Discard = false,
            Transcoding = true,
            Mixing = new VAST.Network.ApiVideoMixing
            {
                Type = VAST.Image.Mixing.VideoMixingType.All,
                Layers = new List<VAST.Network.ApiLayer>
                (
                    new VAST.Network.ApiLayer[]
                    {
                        new VAST.Network.ApiLayer
                        {
                            Sources = new List<int> { 0 },
                            Location = new VAST.Common.Rect(0, 0, 1280, 720),
                        },
                        new VAST.Network.ApiLayer
                        {
                            Sources = new List<int> { 1 },
                            Location = new VAST.Common.Rect(880, 20, 1260, 234),
                            Opacity = 0.9f,
                            FadeInMsec = 500,
                        },
                    }
                )
            },
            Tracks = new List<VAST.Network.ApiVideoTrack>
            (
                new VAST.Network.ApiVideoTrack[]
                {
                    new VAST.Network.ApiVideoTrack
                    {
                        Index = 0,
                        Width = 1280, Height = 720,
                        Framerate = new VAST.Common.Rational(30),
                        KeyframeInterval = 30,
                        Bitrate = 3000000,
                        Codec = VAST.Common.Codec.H264,
                    },
                }
            )
        },
        AudioProcessing = new VAST.Network.ApiAudioProcessing
        {
            Discard = false,
            Transcoding = true,
            Mixing = new VAST.Network.ApiAudioMixing
            {
                Type = VAST.Image.Mixing.AudioMixingType.All,
                Filters = new List<VAST.Network.ApiFilter>
                (
                    new VAST.Network.ApiFilter[]
                    {
                        new VAST.Network.ApiFilter
                        {
                            Sources = new List<int> { 0 },
                            Volume = 1.0f,
                        },
                        new VAST.Network.ApiFilter
                        {
                            Sources = new List<int> { 1 },
                            Volume = 0.5f,
                        },
                    }
                )
            },
            Tracks = new List<VAST.Network.ApiAudioTrack>
            (
                new VAST.Network.ApiAudioTrack[]
                {
                    new VAST.Network.ApiAudioTrack
                    {
                        Index = 1,
                        SampleRate = 44100,
                        Channels = 2,
                        Bitrate = 128000,
                        Codec = VAST.Common.Codec.AAC,
                    }
                }
            )
        }
    }
});
```
Video Layers
When VideoMixingType.All is used, the Layers list defines how sources are composited. Each ApiLayer specifies which sources appear on that layer and where they are positioned. Layers are rendered in order — later layers appear on top.
In this example, source 0 fills the entire 1280x720 frame as the background, and source 1 is placed as a smaller picture-in-picture overlay in the top-right corner with 90% opacity and a 500 ms fade-in.
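The overlay coordinates can be derived from the frame size rather than hardcoded. A sketch of a hypothetical helper (not part of the library), assuming VAST.Common.Rect takes (left, top, right, bottom) as in the sample:

```csharp
// Hypothetical helper: compute a top-right picture-in-picture rectangle.
static VAST.Common.Rect TopRightPip(int frameWidth, int frameHeight,
                                    double scale, int margin)
{
    int pipWidth = (int)(frameWidth * scale);
    int pipHeight = (int)(frameHeight * scale);
    int right = frameWidth - margin;            // inset from the right edge
    int left = right - pipWidth;
    return new VAST.Common.Rect(left, margin, right, margin + pipHeight);
}
```

TopRightPip(1280, 720, 0.3, 20) yields a 384x216 overlay inset 20 px from the top-right corner, roughly the Rect(880, 20, 1260, 234) used in the sample.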
| Property | Type | Description |
|---|---|---|
| Sources | List<int> | Source indices to show on this layer |
| Location | Rect | Position and size as left, top, right, bottom pixel values |
| Opacity | float? | Layer opacity (1.0 = opaque, 0.0 = transparent) |
| Rotation | int? | Rotation angle in degrees |
| Crop | Rect | Crop region applied to the source before compositing |
| Stretch | StretchType | How source content is scaled to fit the layer area |
| FadeInMsec | int? | Fade-in duration in milliseconds |
| FadeOutMsec | int? | Fade-out duration in milliseconds |
| ChromaKeyColor | string | Hex color for chroma key (e.g., "#FF00FF00" for green screen) |
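Based on the property table above, a green-screen layer could be sketched as follows; the color semantics are assumed from the example value in the table:

```csharp
// Sketch: key out green from source 1 before compositing it over lower layers.
new VAST.Network.ApiLayer
{
    Sources = new List<int> { 1 },
    Location = new VAST.Common.Rect(0, 0, 1280, 720),
    ChromaKeyColor = "#FF00FF00", // opaque green, per the example above
}
```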
Audio Filters
When AudioMixingType.All is used, the Filters list controls the volume of each source in the mix. Each ApiFilter specifies which sources it applies to and the volume level. When Filters is null, all sources are mixed at full volume.
In this example, source 0 plays at full volume and source 1 at 50% volume.
| Property | Type | Description |
|---|---|---|
| Sources | List<int> | Source indices this filter applies to |
| Volume | float? | Volume level (1.0 = full, 0.0 = muted) |
| FadeInMsec | int? | Audio fade-in duration in milliseconds |
| FadeOutMsec | int? | Audio fade-out duration in milliseconds |
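Combining these properties, a sketch of fading a commentary source in over one second while the program audio stays at full volume (property names follow the table above):

```csharp
Filters = new List<VAST.Network.ApiFilter>
(
    new VAST.Network.ApiFilter[]
    {
        // program audio at full volume
        new VAST.Network.ApiFilter { Sources = new List<int> { 0 }, Volume = 1.0f },
        // commentary slightly attenuated, faded in over 1 s
        new VAST.Network.ApiFilter
        {
            Sources = new List<int> { 1 },
            Volume = 0.8f,
            FadeInMsec = 1000,
        },
    }
)
```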
Creating the Publishing Point
```csharp
this.server.CreatePublishingPoint("mixing", source,
    new VAST.Network.PublishingPointParameters
    {
        InactivityTimeout = new TimeSpan(1, 0, 0)
    });
```
The InactivityTimeout keeps the publishing point alive for one hour after the last JSON API publishing point update is received.
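Because the mixing configuration is applied via Update(), the same call can be issued again while the point is live, for example to swap the background and picture-in-picture cameras. A sketch, assuming repeated Update() calls are supported and using a hypothetical BuildMixingRequest() helper that returns the compositing request shown earlier:

```csharp
var request = BuildMixingRequest(); // hypothetical helper, not part of the library
// swap which camera is the full-frame background and which is the PiP overlay
request.Processing.VideoProcessing.Mixing.Layers[0].Sources = new List<int> { 1 };
request.Processing.VideoProcessing.Mixing.Layers[1].Sources = new List<int> { 0 };
source.Update(request);
```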
Platform Support
This use-case requires the VAST_FEATURE_MIXING compilation symbol. Mixing depends on the VAST.Image library for video compositing and the VAST.Codec library for transcoding (when FFmpeg or CUDA is chosen). On Linux, FFmpeg or NVIDIA CUDA libraries are required.
See Also
- Sample Applications — overview of all demo projects
- .NET Server Demo — parent page with setup instructions, license key configuration, and access URL reference
- Multi-Protocol Server — overview and full publishing point table
- Initialization — server creation and protocol configuration
- Server Event Handlers — authorization, connections, and error handling
- VAST.Network Library — StreamingServer API reference