Any of the parameters for the `Stream` class can be passed to the `WebRTC` component directly.

You can specify the `track_constraints` parameter to control how the data is streamed to the server. The full documentation on track constraints is here.

For example, you can control the size of the frames captured from the webcam like so:
```python
from fastrtc import Stream

track_constraints = {
    "width": {"exact": 500},
    "height": {"exact": 500},
    "frameRate": {"ideal": 30},
}

webrtc = Stream(
    handler=...,
    track_constraints=track_constraints,
    modality="video",
    mode="send-receive",
)
```
Warning

WebRTC may not enforce your constraints. For example, it may rescale your video (while keeping the same aspect ratio) in order to maintain the desired frame rate (or reach a better one). If you really want to enforce height and width constraints, use the `rtp_params` parameter and set `"degradationPreference": "maintain-resolution"`.
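A minimal sketch of what this might look like, assuming `rtp_params` is accepted by the same constructor that takes `track_constraints` (the handler is a placeholder):

```python
from fastrtc import Stream

# Assumption: rtp_params is passed alongside track_constraints on the same
# constructor. "maintain-resolution" asks WebRTC to preserve the requested
# width/height and degrade frame rate instead.
webrtc = Stream(
    handler=...,
    track_constraints={"width": {"exact": 500}, "height": {"exact": 500}},
    rtp_params={"degradationPreference": "maintain-resolution"},
    modality="video",
    mode="send-receive",
)
```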
You can configure how the connection is created on the client by passing an `rtc_configuration` parameter to the `WebRTC` component constructor. See the list of available arguments here.
Warning

When deploying on a remote server, the `rtc_configuration` parameter must be passed in. See Deployment.
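For example, a minimal sketch that points the client at a public STUN server. The `iceServers` structure follows the browser's standard `RTCConfiguration` shape; the particular server URL is only an illustration, and remote deployments typically also need TURN credentials:

```python
from fastrtc import WebRTC

# Standard RTCConfiguration dict with a single public STUN server.
rtc_configuration = {
    "iceServers": [{"urls": "stun:stun.l.google.com:19302"}],
}

webrtc = WebRTC(
    rtc_configuration=rtc_configuration,
    modality="audio",
    mode="send-receive",
)
```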
The `ReplyOnPause` class runs a Voice Activity Detection (VAD) algorithm to determine when a user has stopped speaking. The following parameters control this behavior:
```python
from fastrtc import AlgoOptions, ReplyOnPause, Stream

options = AlgoOptions(
    audio_chunk_duration=0.6,       # duration in seconds of each audio chunk passed to the VAD model
    started_talking_threshold=0.2,  # a chunk with more than this many seconds of speech means the user started talking
    speech_threshold=0.1,           # after that, a chunk with less than this many seconds of speech means the user paused
)

Stream(
    handler=ReplyOnPause(..., algo_options=options),
    modality="audio",
    mode="send-receive",
)
```
You can configure the sampling rate of the audio passed to the `ReplyOnPause` or `StreamHandler` instance with the `input_sampling_rate` parameter. The current default is `48000`.
```python
from fastrtc import ReplyOnPause, Stream

stream = Stream(
    handler=ReplyOnPause(..., input_sampling_rate=24000),
    modality="audio",
    mode="send-receive",
)
```
You can configure the output audio chunk size of `ReplyOnPause` (and any `StreamHandler`) with the `output_sample_rate` and `output_frame_size` parameters.

The following code (which uses the default values of these parameters) states that each output chunk will be a frame of 960 samples at a sampling rate of 24,000 Hz, so each chunk corresponds to 960 / 24,000 = 0.04 seconds of audio.
```python
from fastrtc import ReplyOnPause, Stream

stream = Stream(
    handler=ReplyOnPause(..., output_sample_rate=24000, output_frame_size=960),
    modality="audio",
    mode="send-receive",
)
```
Tip

In general it is best to leave these settings untouched. In some cases, lowering the `output_frame_size` can yield smoother audio playback.
You can display an icon of your choice instead of the default wave animation for audio streaming. Pass any local path or URL to an image (svg, png, jpeg) to the component's `icon` parameter. This will display the icon as a circular button. When audio is sent or received (depending on the `mode` parameter), a pulse animation will emanate from the button.

You can control the button color and pulse color with the `icon_button_color` and `pulse_color` parameters. They can take any valid CSS color.
Warning

The `icon` parameter is only supported in the `WebRTC` component.
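For example, a minimal sketch (the icon path and colors below are placeholders, not defaults):

```python
from fastrtc import WebRTC

webrtc = WebRTC(
    modality="audio",
    mode="send-receive",
    icon="microphone.svg",       # placeholder: any local path or URL to an svg/png/jpeg
    icon_button_color="black",   # color of the circular button
    pulse_color="green",         # color of the pulse animation
)
```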
You can supply a `button_labels` dictionary to change the text displayed in the `Start`, `Stop` and `Waiting` buttons that are displayed in the UI. The keys must be `"start"`, `"stop"`, and `"waiting"`.
Warning

The `button_labels` parameter is only supported in the `WebRTC` component.
```python
from fastrtc import WebRTC

webrtc = WebRTC(
    label="Video Chat",
    modality="audio-video",
    mode="send-receive",
    button_labels={"start": "Start Talking to Gemini"},
)
```