September 07, 2023
Developing real-time engagement applications where users interact with each other through live audio, video, and text is a complex challenge. Building the infrastructure and logic to support these features takes significant time and effort, and the biggest challenge is making that infrastructure reliable, scalable, and low latency so it delivers the best user experience.
At Agora, we’re solving this problem for developers at scale. Agora’s Software-Defined Real-Time Network™ provides the broadest coverage in the world (200+ countries), while delivering high-quality audio and video with ultra-low latency (400 ms or less). To make the Agora platform easy to adopt, we offer easy-to-use SDKs for Android, iOS/macOS, Windows, Web, Electron, Flutter, React Native, Unity, and more. With our SDKs you can build and deploy your real-time engagement application in a matter of hours instead of days.
Which brings us to the topic of this blog: how do you build a video conferencing app with Agora and React? Agora recently announced a new beta SDK for React, and we’ll look at how it works with a simple demo app.
Note: This blog does not implement token authentication, which is recommended for all RTE apps running in production environments. For more information about token-based authentication in the Agora platform, see this guide.
The source code for this project is available on GitHub, and you can also try out a live demo.
To follow along, scaffold a React project using Vite:
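For example (the project name below is a placeholder; agora-rtc-react is the beta React SDK and agora-rtc-sdk-ng is the core Web SDK it builds on, both imported later in this post):

```shell
# Scaffold a Vite + React (TypeScript) project.
npm create vite@latest agora-react-demo -- --template react-ts
cd agora-react-demo

# Add the Agora React SDK and the underlying Web SDK.
npm install agora-rtc-react agora-rtc-sdk-ng

# Start the dev server.
npm run dev
```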
We’ll start in the App.tsx file. Since this demo is simple, we’ll create all our components in the same file. Let’s start by importing the dependencies we’ll use in our application.
import { useState } from "react";
import { AgoraRTCProvider, useJoin, useLocalCameraTrack, useLocalMicrophoneTrack, usePublish, useRTCClient, useRemoteAudioTracks, useRemoteUsers, RemoteUser, LocalVideoTrack } from "agora-rtc-react";
import AgoraRTC from "agora-rtc-sdk-ng";
import "./App.css";
The Agora React SDK provides a set of hooks and components to manage the state of your application and to render the video call interface.
In our App, let’s initialize a client object from the Agora SDK and pass it to the useRTCClient hook. The client object represents the local user in the video call. Passing the object to the useRTCClient hook makes it available to the rest of the application (and its hooks) through a React Provider. We’ll add the provider in a bit; first, let’s set up our application state:
const App = () => {
const client = useRTCClient(AgoraRTC.createClient({ codec: "vp8", mode: "rtc" }));
const [channelName, setChannelName] = useState("test");
const [AppID, setAppID] = useState("");
const [token, setToken] = useState("");
const [inCall, setInCall] = useState(false);
Next, we render the App component’s UI. In the return block, we’ll render an h1 element to display our heading. Then, based on the inCall state variable, we’ll display either a Form component that collects details (App ID, channel name, and token) from the user, or the video call itself:
return (
<div style={styles.container}>
<h1>Agora React Videocall</h1>
{!inCall ? (
<Form
AppID={AppID}
setAppID={setAppID}
channelName={channelName}
setChannelName={setChannelName}
token={token}
setToken={setToken}
setInCall={setInCall}
/>
) : (
null /* Videocall here */
)}
</div>
);
};
To create the video call component, let’s first wrap it with the AgoraRTCProvider component, which accepts a client returned from the useRTCClient hook and makes it accessible down the tree. You should add this at the top level of your video call.
We’ll create a <Videos> component next, to hold the users’ videos, passing it our props from before. We’ll also display an End Call button that ends the call by setting the inCall state to false:
return (
<div style={styles.container}>
<h1>Agora React Videocall</h1>
{!inCall ? (
<Form
AppID={AppID}
setAppID={setAppID}
channelName={channelName}
setChannelName={setChannelName}
token={token}
setToken={setToken}
setInCall={setInCall}
/>
) : (
<AgoraRTCProvider client={client}>
<Videos channelName={channelName} AppID={AppID} token={token} />
<button onClick={() => setInCall(false)}>End Call</button>
</AgoraRTCProvider>
)}
</div>
);
};
export default App;
We destructure the props to access the AppID, channelName, and token.
The Agora React SDK also gives you useLocalMicrophoneTrack and useLocalCameraTrack hooks, which create and set up the local microphone and camera tracks, respectively. Since creating these tracks is asynchronous, the hooks also return a loading state and an error state along with the tracks.
function Videos(props: { channelName: string; AppID: string; token: string }) {
const { AppID, channelName, token } = props;
const { isLoading: isLoadingMic, localMicrophoneTrack } = useLocalMicrophoneTrack();
const { isLoading: isLoadingCam, localCameraTrack } = useLocalCameraTrack();
We can use the useRemoteUsers hook to access the other (remote) users who join our video call. This hook returns an array of objects, where each object represents a remote user in the call. The array works like React state: it updates each time someone joins or leaves the channel, and we’ll use it to render our UI and keep it in sync with the state of the call:
const remoteUsers = useRemoteUsers();
We can use the usePublish hook to publish the local microphone and camera tracks. You pass in an array of the tracks you want to publish to the channel; other users who join the same channel can then subscribe to and view these tracks.
usePublish([localMicrophoneTrack, localCameraTrack]);
To start the call, we need to join a room, or channel. We can do that by calling the useJoin hook and passing it the AppID, channelName, and token:
useJoin({
appid: AppID,
channel: channelName,
token: token === "" ? null : token,
});
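The inline ternary above exists because the SDK expects null, not an empty string, when no token is used. If you prefer, that check can be factored into a tiny helper — a sketch (the name normalizeToken is ours, not part of the SDK):

```typescript
// Hypothetical helper (not from the SDK): the join call expects `null`,
// not an empty string, when the channel has no token.
function normalizeToken(token: string | null): string | null {
  return token === "" ? null : token;
}

// Usage inside the component would then be:
// useJoin({ appid: AppID, channel: channelName, token: normalizeToken(token) });
```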
We can access the remote users’ audio tracks with the useRemoteAudioTracks hook by providing it the remoteUsers array. This hook automatically handles subscribing to and unsubscribing from the users’ tracks as your component mounts and unmounts and as tracks become available.
const { audioTracks } = useRemoteAudioTracks(remoteUsers);
To hear the remote users, we can iterate over the audioTracks array and call the play method on each available track:
audioTracks.forEach((track) => track.play());
We’ll check if either the microphone or the camera is still loading and render a simple loading message:
const deviceLoading = isLoadingMic || isLoadingCam;
if (deviceLoading) return <div style={styles.grid}>Loading devices...</div>;
Once the tracks are ready, we can render a grid with videos of all the users in the channel. We can render the user’s own (local) video track using the LocalVideoTrack component from the SDK, passing it the localCameraTrack as the track prop:
return (
<div style={{ ...styles.grid, ...returnGrid(remoteUsers) }}>
<LocalVideoTrack track={localCameraTrack} play={true} style={styles.gridCell} />
{/* Remote videos here */}
</div>
);
}
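The returnGrid helper used in the style prop above is defined in the project’s GitHub repo; it picks a CSS grid layout based on how many remote users are in the call. A minimal sketch (the exact breakpoints here are our assumptions, not necessarily what the repo uses):

```typescript
// One equally-sized grid column.
const unit = "minmax(0, 1fr) ";

// Sketch of returnGrid: widen the grid as more remote users join.
// Breakpoints (3 / 8 users) are assumed, not taken from the repo.
const returnGrid = (remoteUsers: { uid: string | number }[]) => ({
  gridTemplateColumns:
    remoteUsers.length > 8
      ? unit.repeat(4)
      : remoteUsers.length > 3
      ? unit.repeat(3)
      : remoteUsers.length > 0
      ? unit.repeat(2)
      : unit,
});
```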
We can display the remote users’ video tracks using the RemoteUser component. We’ll iterate through the remoteUsers array, passing each user as a prop to it:
return (
<div style={{ ...styles.grid, ...returnGrid(remoteUsers) }}>
<LocalVideoTrack track={localCameraTrack} play={true} style={styles.gridCell} />
{remoteUsers.map((user) => (
<RemoteUser key={user.uid} user={user} style={styles.gridCell} />
))}
</div>
);
}
These components are unopinionated, so you can style them however you like.
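The styles object referenced throughout (styles.container, styles.grid, styles.gridCell) is defined in the repo. A minimal placeholder with the same property names — the actual values are our assumptions — might look like:

```typescript
// Placeholder styles matching the names used in the snippets above.
// `as const` keeps literal types, so the objects satisfy React's
// CSSProperties when spread into a style prop.
const styles = {
  container: { display: "flex", flexDirection: "column", alignItems: "center", gap: "1rem" },
  grid: { display: "grid", gap: "1rem", width: "100%" },
  gridCell: { minHeight: "200px", borderRadius: "8px", overflow: "hidden" },
} as const;
```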
That’s all the code we need to build a video conferencing app with Agora and React. Here’s what the final code looks like:
function Videos(props: { channelName: string; AppID: string; token: string }) {
const { AppID, channelName, token } = props;
const { isLoading: isLoadingMic, localMicrophoneTrack } = useLocalMicrophoneTrack();
const { isLoading: isLoadingCam, localCameraTrack } = useLocalCameraTrack();
const remoteUsers = useRemoteUsers();
const { audioTracks } = useRemoteAudioTracks(remoteUsers);
usePublish([localMicrophoneTrack, localCameraTrack]);
useJoin({
appid: AppID,
channel: channelName,
token: token === "" ? null : token,
});
audioTracks.forEach((track) => track.play());
const deviceLoading = isLoadingMic || isLoadingCam;
if (deviceLoading) return <div style={styles.grid}>Loading devices...</div>;
return (
<div style={{ ...styles.grid, ...returnGrid(remoteUsers) }}>
<LocalVideoTrack track={localCameraTrack} play={true} style={styles.gridCell} />
{remoteUsers.map((user) => (
<RemoteUser key={user.uid} user={user} style={styles.gridCell} />
))}
</div>
);
}
For the sake of completeness, here’s what the Form component looks like:
function Form(props: {
AppID: string;
setAppID: (value: string) => void;
channelName: string;
setChannelName: (value: string) => void;
token: string;
setToken: (value: string) => void;
setInCall: (value: boolean) => void;
}) {
const { AppID, setAppID, channelName, setChannelName, token, setToken, setInCall } = props;
return (
<div>
<p>Please enter your Agora AppID and Channel Name</p>
<label htmlFor="appid">Agora App ID: </label>
<input id="appid" type="text" value={AppID} onChange={(e) => setAppID(e.target.value)} placeholder="required"/>
<br /><br />
<label htmlFor="channel">Channel Name: </label>
<input id="channel" type="text" value={channelName} onChange={(e) => setChannelName(e.target.value)} placeholder="required" />
<br /><br />
<label htmlFor="token">Channel Token: </label>
<input id="token" type="text" value={token} onChange={(e) => setToken(e.target.value)} placeholder="optional" />
<br /><br />
<button onClick={() => AppID && channelName ? setInCall(true) : alert("Please enter Agora App ID and Channel Name")}>
Join
</button>
</div>
);
}
That’s all it takes to put together a high-quality video conferencing app with the Agora React SDK. We’ve barely scratched the surface of what’s possible: you can add features such as virtual backgrounds, selective subscriptions, waiting rooms, and more. Learn more by visiting the docs and our API reference.
We’re looking for feedback on how we can improve the SDK in this beta period. Please contribute by opening issues (and submitting PRs) on our GitHub repo.