VCS SDK Core concepts

Before reviewing the core concepts of VCS, let's first distinguish between the baseline composition and the SDK.

If you already know the difference between them, jump to Core concepts.

VCS Baseline composition vs. SDK

Daily's baseline composition

The baseline composition is VCS's default collection of layout and graphics features that work together and are easily accessible through a unified JSON-based control interface. It allows developers to add custom layouts and graphics via the live streaming and recording instance methods in daily-js and react-native-daily-js.

In other words, you can pass a JSON blob of composition properties in the startLiveStreaming() or startRecording() Daily instance methods, and VCS will apply those settings to your video feed.
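As a sketch, passing composition properties to a Daily instance method might look like the following. The param names (`mode`, `showTextOverlay`, `text.content`) are examples of baseline composition params, and the RTMP URL is a placeholder; check the baseline composition's param reference for the full list:

```javascript
// Hypothetical example: param names follow the baseline composition's
// public params; adjust these to the params your use case needs.
const streamOptions = {
  rtmpUrl: 'rtmps://example.com/live/stream-key', // placeholder endpoint
  layout: {
    preset: 'custom',
    composition_params: {
      mode: 'grid',              // baseline layout mode
      showTextOverlay: true,     // toggle the built-in text overlay
      'text.content': 'Hello from VCS!',
    },
  },
};

// With a daily-js call object in scope, you'd start the stream like this:
// callObject.startLiveStreaming(streamOptions);
```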

Read our blog tutorial on using the baseline composition for more information.


The VCS SDK is the open-source version of Daily's baseline composition. It's better suited for anyone who wants to customize their live streams or recordings beyond the options made available via the baseline composition. For example, our tutorial on adding a moving watermark to your live stream covers a customization that's only possible with the VCS SDK, not through the baseline composition.

Instead of passing a JSON blob, you can clone the VCS SDK GitHub repo, edit components, and then pass your custom components to the Daily instance method you're using. VCS will swap in your custom components and apply those to the video feed instead.
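To give a feel for what editing a component looks like, here's a hypothetical overlay component. The `#vcs-react` import paths, the `Box` and `Text` components, and the style prop names follow the SDK's conventions, but treat them as assumptions to verify against your clone of the repo:

```jsx
// Illustrative sketch only: import paths and style prop names may
// differ slightly in your checkout of the VCS SDK.
import * as React from 'react';
import { Box, Text } from '#vcs-react/components';
import { useVideoTime } from '#vcs-react/hooks';

// A custom overlay that renders an elapsed-time label on top of the feed.
export default function TimeBadge() {
  const t = useVideoTime(); // seconds since the composition started

  return (
    <Box>
      <Text style={{ fontSize_px: 24, textColor: 'white' }}>
        {`Elapsed: ${Math.floor(t)}s`}
      </Text>
    </Box>
  );
}
```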

Let's not get ahead of ourselves, though, because the VCS SDK requires some background information to fully understand how it works. Now that we know what the VCS SDK is, we can take a look at its core concepts.

Core concepts

Daily's Video Component System (VCS) is a video composition SDK built by Daily for Daily live streaming and cloud recording. VCS is a React-based system for embeddable cross-platform real-time video compositions. Quite a mouthful! Let's expand on each of these points and the main goals of VCS.

VCS is compatible with any frontend framework.

Even though VCS is React-based, your app using VCS for Daily live streaming or cloud recording does not need to be built with React.

VCS-based cloud rendering is achieved through the Daily API. This means the VCS code you run in the cloud is by default entirely independent of how your client application is structured. In other words, your web app could be written with Vue, Angular, or any other framework and still be compatible with VCS.

Video composition

In VCS, a video composition is a JavaScript program using the React runtime. The program receives input in specific formats that describe your live video inputs, images, and control data. Based on this input, the composition renders a tree of video and graphics elements, which the VCS framework then further transforms into an optimized representation that gets turned into final video pixels by a native pipeline. In other words, the composition is a long-running program that decides what to render at each frame, but doesn't access or modify video frame data directly.


Live video usually runs at 30fps, and missed frames are often noticeable, so VCS operates under a strict performance constraint: frames must be delivered quickly. Although JavaScript engines are fast nowadays, they're definitely not so fast that you'd want to execute every aspect of video compositing in pure JavaScript! VCS is designed so that only the parts of the compositing pipeline that benefit from programmability and composability are implemented in JavaScript and React. For everything else, native capabilities are harnessed for the heavy lifting of graphics and video rendering. In other words, all VCS components are "real-time-friendly".


VCS compositions are programs but they're not standalone applications. Typically VCS provides compositing capabilities to a host application. The VCS composition executes in its own JavaScript sandbox that doesn't have access to the rest of the host.

You may already be familiar with using the VCS baseline composition via the Daily API for live streaming and recording. When you send composition param values through this API, you're working on the outside of the embedding boundary. The baseline composition executes on Daily's cloud, embedded inside a media processing pipeline, but the only way to communicate with this composition is through its params API.
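For instance, a sketch of communicating with a running composition from the outside of the boundary might look like the following, using daily-js's `updateLiveStreaming()` method. The param names here are examples from the baseline composition:

```javascript
// Hypothetical sketch: changing params on a live stream mid-session.
// Param names follow the baseline composition's conventions.
const update = {
  layout: {
    preset: 'custom',
    composition_params: {
      mode: 'dominant',        // switch the layout mode mid-stream
      showTextOverlay: false,  // hide the text overlay
    },
  },
};

// With a daily-js call object in scope:
// callObject.updateLiveStreaming(update);
```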

The VCS SDK is the open source version of the Daily baseline composition code. Think of using the VCS SDK as being able to edit and update the baseline composition to match your exact live streaming and recording layout requirements. To see what is available via the baseline composition, try out Daily's VCS Simulator, which contains an interface for toggling or setting every composition param available through the baseline composition.

As a VCS SDK developer, you'll now be working on the inside of the embedding boundary. You get to define the params API in a way that makes sense for your composition, and you get access to direct programmability that would be hard or impossible to express in terms of a plain JSON API. At the same time, you need to be aware of the boundary: the host application puts your code in a sandbox for a reason, and you need to work within what's available.
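As an illustrative sketch, a custom composition can declare its own params API in a `compositionInterface` declaration. The shape below follows the SDK's convention, but the field names are assumptions to verify against the repo:

```javascript
// Sketch of declaring your composition's params API. In a real
// composition this object would be exported from your entry file;
// treat field names as illustrative.
const compositionInterface = {
  displayName: 'My custom composition',
  params: [
    // Each param gets an id, a type, and a default value; the host
    // application (or the Daily API) can update these at runtime.
    { id: 'showWatermark', type: 'boolean', defaultValue: true },
    { id: 'watermarkOpacity', type: 'number', defaultValue: 0.5 },
    { id: 'bannerText', type: 'text', defaultValue: '' },
  ],
};
```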

The VCS SDK provides a tool called the VCS Simulator, which loads your custom composition into a standard browser-based GUI. The VCS Simulator acts like a generic host application: it lets you click buttons to send data to your composition and see how its output changes. Once your code works in the simulator, we do our best to guarantee that it produces the same output on Daily's cloud pipeline as well.

Note: The VCS Simulator will reflect whatever local changes have been made. It is the editable version of the VCS Simulator Daily hosts as a tool for developers working with the baseline composition.


The previous paragraph implicitly mentioned two quite different platforms already: the VCS Simulator runs locally in your browser, while Daily's server-side pipeline runs somewhere in the cloud. The same VCS composition can be loaded in both contexts.

This is a core promise of VCS — it's an abstraction for programmable video composition that scales to deliver real-time video performance on almost any platform. VCS makes a special effort to optimize its rendering pipeline for the data types and color spaces used in video content. This is unlike browser engines or most tools outside of very specialized live video apps. We currently offer the browser target and the Daily server pipeline, but the VCS paradigm is also compatible with mobile operating systems and GPU-accelerated graphics.

Though this isn't available yet, the composition you build in VCS today could eventually run inside an iOS app with full acceleration and native graphics performance. We're working on this, so keep an eye out for more announcements.


We chose React for VCS because it's a popular runtime with a powerful abstraction that's independent of browser APIs. The success of React Native has shown that React is very suitable for non-web application contexts. VCS takes many lessons from React Native and applies them to dynamic video compositing. For example, VCS defines its own set of component types, much like React Native provides a <View> component that plays the role a <div> element plays in a browser.
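For illustration, a minimal layout using VCS component types might look like the following. The `Box` and `Video` component names and the `src` prop are assumptions based on the SDK's documented conventions:

```jsx
// Illustrative: <Box> plays a role similar to React Native's <View>
// or a browser <div>; names and props may vary by SDK version.
import * as React from 'react';
import { Box, Video } from '#vcs-react/components';

export default function FullScreenVideo() {
  return (
    <Box id="main">
      {/* Render one video input, filling the box */}
      <Video src={0} />
    </Box>
  );
}
```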

Read our VCS best practices page for more information on building VCS React components.


These design choices also have some important consequences for how the framework is implemented and how it's best used.

The set of components and styles available in VCS is necessarily limited compared to HTML and CSS. We need to keep everything real-time-friendly and compatible with the performance constraints of scalable server-side rendering, meaning it's simply not possible to provide the enormous range of capabilities found in CSS today.

Even though many of the style options in VCS look similar to CSS, available options are curated for the most common styles required to customize video feed layouts. Review the style properties included in the description of each VCS component to learn more.

You also need to be aware of some requirements around how you structure your composition to be compatible with the embedding and output optimization.

As a next step, head over to our VCS best practices page to see how your mental model of React may need to adjust a little for video rendering.

Next, continue reading our additional Core concepts pages:

Suggested blog posts