Publishing Point with Camera/Mic Capture Source
This use-case demonstrates how to capture video from a local camera and audio from a microphone, encode both streams, and serve them as a publishing point. Device enumeration is asynchronous, and the two capture sources are combined using AggregatedNetworkSource.
Overview
createCaptureSource() is an async void method that performs the following steps:
- Creates an AggregatedNetworkSource to combine video and audio
- Enumerates available video capture devices and creates a video capture source from the first device
- Enumerates available audio capture devices and creates an audio capture source from the first device
- Configures encoding parameters for both streams
- Creates the publishing point
The method is called from a background task on supported platforms:
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
    || RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
{
    Task.Run(() => createCaptureSource());
}
Video Capture
Enumerating Devices
List<VAST.Capture.VideoCaptureDeviceDescriptor> videoDevices =
await VAST.Capture.VideoDeviceEnumerator.Enumerate();
VideoDeviceEnumerator returns a list of available video capture devices. Each VideoCaptureDeviceDescriptor contains the device ID, name, and a list of supported capture modes.
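If you want to let the user pick a device instead of taking the first one, the returned descriptors can be inspected first. A minimal sketch, assuming the descriptor exposes the display name via a Name property (the exact property name may differ); DeviceId and CaptureModes are the same members used later in this sample:
foreach (VAST.Capture.VideoCaptureDeviceDescriptor d in videoDevices)
{
    // Name is assumed here for illustration; DeviceId and CaptureModes are shown below.
    Console.WriteLine($"{d.DeviceId}: {d.Name} ({d.CaptureModes?.Count ?? 0} modes)");
}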
Selecting a Capture Mode
VAST.Capture.VideoCaptureDeviceDescriptor device = videoDevices[0];
VAST.Capture.VideoCaptureMode captureMode = null;
if (device.CaptureModes != null && device.CaptureModes.Count > 0)
{
    captureMode = (device.DefaultCaptureMode >= 0)
        ? device.CaptureModes[device.DefaultCaptureMode]
        : device.CaptureModes[0];
}
The sample uses the first available device and its default capture mode. If no default is set (DefaultCaptureMode is -1), the first available mode is used. Each capture mode specifies resolution and framerate supported by the device.
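If a specific format is preferred over the device default, the CaptureModes list can be scanned directly instead. An illustrative sketch only, using the Width and Height properties read later in this section; the 1280x720 target is just an example:
if (device.CaptureModes != null)
{
    // Illustrative only: prefer a 1280x720 mode if the device offers one,
    // otherwise keep the mode selected above.
    foreach (VAST.Capture.VideoCaptureMode m in device.CaptureModes)
    {
        if (m.Width == 1280 && m.Height == 720)
        {
            captureMode = m;
            break;
        }
    }
}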
Creating the Video Source
VAST.Capture.IVideoCaptureSource2 videoSource =
VAST.Media.SourceFactory.CreateVideoCapture(device.DeviceId, captureMode);
int width = (captureMode != null) ? captureMode.Width : 1280;
int height = (captureMode != null) ? captureMode.Height : 720;
VAST.Common.Rational framerate =
new VAST.Common.Rational((captureMode != null) ? captureMode.Framerate : 30.0);
SourceFactory creates the platform-appropriate video capture source. Resolution and framerate are taken from the capture mode when available, with 1280x720 at 30 fps as defaults.
Configuring Video Encoding
VAST.Common.MediaType mt = new VAST.Common.MediaType
{
    ContentType = VAST.Common.ContentType.Video,
    CodecId = VAST.Common.Codec.H264,
    Bitrate = width * height * 3,
    Width = width,
    Height = height,
    Framerate = framerate,
};
mt.Metadata.Add("KeyframeInterval", "30");
mt.Metadata.Add("Profile", "66");
await videoSource.SetDesiredOutputType(0, mt);
The capture source handles encoding internally. SetDesiredOutputType configures the H.264 encoder with a keyframe interval of 30 frames and Baseline profile (66). By default, the built-in framework is used for encoding. To use FFmpeg instead:
videoSource.EncoderParameters = new VAST.Media.EncoderParameters
{
    PreferredMediaFramework = VAST.Common.MediaFramework.FFmpeg,
    AllowHardwareAcceleration = false,
};
Audio Capture
Enumerating Devices
List<VAST.Capture.AudioCaptureDeviceDescriptor> audioDevices =
await VAST.Capture.AudioDeviceEnumerator.Enumerate();
AudioDeviceEnumerator returns a list of available audio capture devices (microphones, line-in, etc.).
Creating the Audio Source
VAST.Capture.AudioCaptureDeviceDescriptor device = audioDevices[0];
VAST.Capture.AudioCaptureMode captureMode = null;
if (device.CaptureModes != null && device.CaptureModes.Count > 0)
{
    captureMode = (device.DefaultCaptureMode >= 0)
        ? device.CaptureModes[device.DefaultCaptureMode]
        : device.CaptureModes[0];
}
VAST.Capture.IAudioCaptureSource2 audioSource =
VAST.Media.SourceFactory.CreateAudioCapture(device.DeviceId, captureMode);
Configuring Audio Encoding
VAST.Common.MediaType mt = new VAST.Common.MediaType
{
    ContentType = VAST.Common.ContentType.Audio,
    CodecId = VAST.Common.Codec.AAC,
    SampleRate = 44100,
    Channels = 2,
    Bitrate = 128000,
};
await audioSource.SetDesiredOutputType(0, mt);
Audio is encoded as AAC at 44.1 kHz stereo, 128 kbps. The EncoderParameters property can be set to switch to FFmpeg encoding, just as for video.
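As with the video source, a minimal sketch of switching the audio encoder to FFmpeg might look like this (the property and types are the same ones shown in the video section):
audioSource.EncoderParameters = new VAST.Media.EncoderParameters
{
    PreferredMediaFramework = VAST.Common.MediaFramework.FFmpeg,
    AllowHardwareAcceleration = false,
};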
Combining Sources
VAST.Network.AggregatedNetworkSource source = new VAST.Network.AggregatedNetworkSource();
// ... create video and audio sources ...
source.AddSource(videoSource);
source.AddSource(audioSource);
this.server.CreatePublishingPoint("capture", source);
Since video and audio come from separate capture devices, they are combined using AggregatedNetworkSource — the same approach used in Push Source 2. If no video or audio devices are available, the corresponding source is simply not added.
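A condensed sketch of how those guards might look, reusing the enumeration results from the earlier steps (videoDevices and audioDevices) and the sources created from them:
VAST.Network.AggregatedNetworkSource source = new VAST.Network.AggregatedNetworkSource();

if (videoDevices.Count > 0)
{
    // videoSource was created and configured as shown in the Video Capture section.
    source.AddSource(videoSource);
}

if (audioDevices.Count > 0)
{
    // audioSource was created and configured as shown in the Audio Capture section.
    source.AddSource(audioSource);
}

this.server.CreatePublishingPoint("capture", source);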
Error Handling
The entire method is wrapped in a try/catch/finally block. If the capture library is not present on the system (for example, because the platform does not support capture), the exception is caught and the publishing point is not created. The AggregatedNetworkSource is disposed in the finally block if the publishing point was not successfully created.
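A condensed sketch of that structure; the published flag and the console message are placeholders, not part of the sample:
VAST.Network.AggregatedNetworkSource source = null;
bool published = false;
try
{
    source = new VAST.Network.AggregatedNetworkSource();
    // ... enumerate devices, create and configure the video/audio sources, add them ...
    this.server.CreatePublishingPoint("capture", source);
    published = true;
}
catch (Exception ex)
{
    // Capture library not present or capture unsupported on this platform.
    Console.WriteLine($"Capture publishing point not created: {ex.Message}");
}
finally
{
    if (!published)
    {
        source?.Dispose();
    }
}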
Platform Support
Camera and microphone capture is supported on Windows and macOS. The sample guards the call to createCaptureSource() with a platform check, as shown in the Overview.
See Also
- Sample Applications — overview of all demo projects
- .NET Server Demo — parent page with setup instructions, license key configuration, and access URL reference
- Multi-Protocol Server — overview and full publishing point table
- Initialization — server creation and protocol configuration
- Server Event Handlers — authorization, connections, and error handling
- VAST.Network Library — StreamingServer API reference