VCS SDK Core concepts
Before reviewing the core concepts for VCS, let's first distinguish between the baseline composition and the SDK.
If you already know the difference between them, jump to Core concepts.
The baseline composition is VCS's default collection of layout and graphics features that work together and are easily accessible through a unified JSON-based control interface. It allows developers to add custom layouts and graphics via react-native-daily-js's live streaming and recording instance methods.
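To make the "JSON-based control interface" concrete, here is a minimal sketch of a layout configuration for the baseline composition. The specific param names shown (mode, showTextOverlay, text.content) are illustrative; check the baseline composition's params reference for the authoritative list.

```javascript
// A hedged sketch of baseline composition control via JSON params.
// Param names below follow the baseline composition's dotted-path naming
// convention, but treat the exact names as assumptions to verify.
const layout = {
  preset: 'custom',
  composition_params: {
    mode: 'grid',                // layout mode for participant videos
    showTextOverlay: true,       // toggle a text overlay graphic
    'text.content': 'Hello VCS', // the overlay's text content
  },
};

// In a real app you'd pass this to a live-call instance method, e.g.:
//   callObject.startLiveStreaming({ rtmpUrl, layout });
// (commented out here because it requires a connected Daily call object)
console.log(Object.keys(layout.composition_params).length);
```

The key point: from outside, your only lever is a flat bag of param values like this one — there's no way to add new graphics the composition doesn't already implement.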
The VCS SDK is the open-sourced version of Daily's baseline composition. It's better suited to anyone who wants to customize their live streams or recordings beyond the options the baseline composition makes available. For example, our tutorial on adding a moving watermark to your live stream covers a customization that's only possible with the VCS SDK, not through the baseline composition.
Instead of passing a JSON blob, you can clone the VCS SDK GitHub repo, edit components, and then pass your custom components to the Daily instance method you're using. VCS will swap in your custom components and apply those to the video feed instead.
Let's not get ahead of ourselves, though, because the VCS SDK requires some background information to fully understand how it works. Now that we know what the VCS SDK is, we can take a look at its core concepts.
Daily's Video Component System (VCS) is a video composition SDK built by Daily for Daily live streaming and cloud recording. VCS is a React-based system for embeddable, cross-platform, real-time video compositions. Quite a mouthful! Let's expand on each of these points and on the main goals of VCS.
VCS is compatible with any frontend framework.
Even though VCS is React-based, your app using VCS for Daily live streaming or cloud recording does not need to be built with React.
VCS-based cloud rendering is achieved through the Daily API. This means the VCS code you run in the cloud is by default entirely independent of how your client application is structured. In other words, your web app could be written with Vue, Angular, or any other framework and still be compatible with VCS.
You may already be familiar with using the VCS baseline composition via the Daily API in live streaming and recording. When you're sending composition param values over this Daily API, you're working on the outside of the embedding boundary. The baseline composition executes on Daily's cloud embedded inside a media processing pipeline, but the only way to communicate with this composition is through its params API.
As a VCS SDK developer, you'll now be working on the inside of the embedding boundary. You get to define the params API in a way that makes sense for your composition. You get access to direct programmability that would be hard or impossible to express in terms of a plain JSON API. Similarly, you also need to be aware of the boundary. That is to say: the host application puts your code in a sandbox for a reason and you need to work within what's available.
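To illustrate what "defining the params API" can look like from the inside of the boundary, here is a hedged sketch of a composition declaring its own params. The export shape shown (a compositionInterface object with a params array of { id, type, defaultValue } entries) mirrors the pattern used in the VCS SDK repo, but treat the exact field names as assumptions to verify against the SDK.

```javascript
// Hedged sketch: a custom composition declaring the params it accepts.
// Field names are assumptions modeled on the VCS SDK's conventions.
const compositionInterface = {
  displayMeta: { name: 'My custom composition' },
  params: [
    { id: 'showWatermark', type: 'boolean', defaultValue: true },
    { id: 'watermark.opacity', type: 'number', defaultValue: 0.5 },
  ],
};

// A host application (Daily's pipeline, or the VCS Simulator) reads these
// declarations to know which param values it may send across the boundary.
const defaults = Object.fromEntries(
  compositionInterface.params.map((p) => [p.id, p.defaultValue])
);
console.log(defaults.showWatermark); // true
```

The host never calls into your components directly; it only delivers param values matching these declarations, which is exactly the sandboxing the paragraph above describes.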
The VCS SDK provides a tool called the VCS Simulator which loads your custom composition into a standard browser-based GUI. The VCS Simulator is like a generic host application that lets you click buttons to send data to your composition and see how its output changes. When your code works in the simulator, we do our best to guarantee that it will produce the same output on Daily's cloud pipeline as well.
The previous paragraph implicitly mentioned two quite different platforms already: the VCS Simulator runs locally in your browser, while Daily's server-side pipeline runs somewhere in the cloud. The same VCS composition can be loaded in both contexts.
This is a core promise of VCS — it's an abstraction for programmable video composition that scales to deliver real-time video performance on almost any platform. VCS makes a special effort to optimize its rendering pipeline for the data types and color spaces used in video content. This is unlike browser engines or most tools outside of very specialized live video apps. We currently offer the browser target and the Daily server pipeline, but the VCS paradigm is also compatible with mobile operating systems and GPU-accelerated graphics.
We chose React for VCS because it's a popular runtime with a powerful abstraction that's independent of browser APIs. The success of React Native has shown that React is well suited to non-web application contexts. VCS takes many lessons from React Native and applies them to dynamic video compositing. For example, VCS defines its own set of component types, much like React Native provides a <View> component that plays a role similar to a <div> element in a browser.
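To give a flavor of what a composition component looks like, here is a minimal sketch written without JSX so it runs standalone. In the real VCS SDK you'd write JSX and import components such as Box and Text from the SDK itself; the stand-in h() helper and the prop names below are illustrative assumptions only.

```javascript
// Stand-in element factory so this sketch is self-contained. In real VCS
// code, JSX and the SDK's own components take this role.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// A composition component receives param values from the host application
// and returns a tree of VCS elements (here: a hypothetical Box with Text).
function MyComposition({ params }) {
  return h(
    'Box',
    { id: 'main' },
    h('Text', { style: { textColor: 'white' } }, params.headerText)
  );
}

const tree = MyComposition({ params: { headerText: 'Hello VCS' } });
console.log(tree.children[0].children[0]); // 'Hello VCS'
```

The mental model is the same as React on the web: declare a tree, and let the runtime (here, VCS's video renderer rather than the browser DOM) turn it into output.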
These features also have some important consequences for how the framework is implemented and how it's best used.
The set of components and styles available in VCS is necessarily limited compared to HTML and CSS. We need to keep everything real-time-friendly and compatible with the performance constraints of scalable server-side rendering, meaning it's simply not possible to provide the enormous range of capabilities found in CSS today.
Even though many of the style options in VCS look similar to CSS, available options are curated for the most common styles required to customize video feed layouts. Review the style properties included in the description of each VCS component to learn more.
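As a hedged sketch of that CSS-like-but-curated style system: VCS styles are plain objects passed to a component's style prop. The "_px" suffix convention and names like textColor and cornerRadius_px follow VCS's documented style properties, but check each component's description for the exact supported set.

```javascript
// Illustrative VCS-style objects (property names are assumptions to verify
// against each component's documented style properties).
const boxStyle = {
  fillColor: 'rgba(0, 0, 0, 0.5)', // semi-transparent background
  cornerRadius_px: 12,             // explicit pixel units via "_px" suffix
};

const textStyle = {
  textColor: 'white',
  fontFamily: 'Roboto',
  fontSize_px: 24,
};

// Unlike CSS, there's no cascade or selector system: each component gets
// exactly the style object you hand it.
console.log(boxStyle.cornerRadius_px + textStyle.fontSize_px); // 36
```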
You also need to be aware of some requirements for how you structure your composition so that it remains compatible with embedding and with VCS's output optimizations.
As a next step, let's consider how your mental model of React may need to adjust a little for video rendering in our VCS best practices page.
Next, continue reading our additional Core concepts pages: