I have been experimenting with the Ruby Docker API, working toward using containers for RTSP streaming services (RTSP input, ffmpeg transcoding, then output). Allowing one connection per RTSP server works: spinning up one container per streamer, plus another for the ffmpeg job that reads the stream, lets the output be written to S3 for easier streaming and more control over each stream.
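As a sketch of the per-streamer setup, a small helper can build the environment each streamer's container needs. The helper name, the stream-id scheme, and the RTSP port 8554 are my own illustrative assumptions; the variable names match the Dockerfile below.

```ruby
# Build the environment array for one streamer's container.
# Variable names match the Dockerfile's ENTRYPOINT; the helper,
# the :8554 port, and the .m3u8 naming are illustrative assumptions.
def stream_env(stream_id, bucket:, minio_server:, rtsp_server:)
  [
    "BUCKET_NAME=#{bucket}",
    "MINIO_SERVER=#{minio_server}",
    "RTSP_INPUT_STREAM=rtsp://#{rtsp_server}:8554/#{stream_id}",
    "RTSP_LOCATION=#{stream_id}.m3u8"
  ]
end

env = stream_env("cam1",
                 bucket: "examplebucket",
                 minio_server: "host.docker.internal",
                 rtsp_server: "host.docker.internal")
# env is in the shape the Docker API expects for the 'Env' key
```

One array like this per streamer keeps each container's configuration independent, which is the point of the one-container-per-stream layout.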
Images
S3FS From a Container with FFMPEG
FROM alpine:3.14
RUN apk update && \
    apk add git build-base automake autoconf libxml2 libxml2-dev fuse-dev curl-dev ffmpeg
WORKDIR /app
RUN mkdir /app/s3mount
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse.git
WORKDIR /app/s3fs-fuse
RUN ./autogen.sh
RUN ./configure
RUN make
RUN make install
WORKDIR /app
# s3fs $BUCKET_NAME s3mount -o use_path_request_style,url=http://$MINIO_SERVER:9000
ENTRYPOINT s3fs $BUCKET_NAME s3mount -o use_path_request_style,url=$MINIO_SERVER && ffmpeg -v verbose -i $RTSP_INPUT_STREAM -c:v libx264 -c:a aac -ac 1 -strict -2 -crf 18 -profile:v baseline -maxrate 400k -bufsize 1835k -pix_fmt yuv420p -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 s3mount/$RTSP_LOCATION >&2
Command to run the s3fs container:
# Must run privileged for fuse!
docker run --add-host=host.docker.internal:host-gateway -e AWSACCESSKEYID=test1put -e AWSSECRETACCESSKEY=test1put -e MINIO_SERVER=host.docker.internal -e BUCKET_NAME=examplebucket -e RTSP_SERVER=host.docker.internal --restart=on-failure:5 --privileged --name s3fstest -dt s3fs-docker
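The same run can be expressed through the Ruby Docker API. The sketch below only builds the create payload (the 'Privileged', 'ExtraHosts', and 'RestartPolicy' keys are the Docker Engine API equivalents of the flags above); the actual docker-api gem call is left commented out since it needs a running daemon, and the dummy credentials are carried over from the command as-is.

```ruby
# Build the Docker Engine API payload equivalent to the
# docker run command above.
def s3fs_container_config(name: 's3fstest')
  {
    'name'  => name,
    'Image' => 's3fs-docker',
    'Env'   => [
      'AWSACCESSKEYID=test1put',
      'AWSSECRETACCESSKEY=test1put',
      'MINIO_SERVER=host.docker.internal',
      'BUCKET_NAME=examplebucket',
      'RTSP_SERVER=host.docker.internal'
    ],
    'HostConfig' => {
      'Privileged'    => true,  # must run privileged for FUSE
      'ExtraHosts'    => ['host.docker.internal:host-gateway'],
      'RestartPolicy' => { 'Name' => 'on-failure',
                           'MaximumRetryCount' => 5 }
    }
  }
end

# With the docker-api gem and a running daemon, this would become:
#   require 'docker'
#   Docker::Container.create(s3fs_container_config).start
```

Building the hash separately from the create call also makes the per-stream configuration easy to test without touching Docker.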
With --restart=on-failure:5, Docker will restart the s3fstest container up to 5 times on failure; if the ffmpeg command still cannot connect to the RTSP server after those 5 attempts, the container stays stopped.
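Whether those retries have been exhausted can be read back from the container's inspect data, which exposes a RestartCount field alongside the State block. A sketch, where the gave_up? helper is my own:

```ruby
# Decide from a container's inspect data whether the on-failure
# restart policy has been exhausted. 'RestartCount' and 'State'
# are standard docker inspect fields; the helper is illustrative.
def gave_up?(inspect_data, max_retries: 5)
  inspect_data['RestartCount'] >= max_retries &&
    inspect_data['State']['Running'] == false
end

# With the docker-api gem this data would come from container.json;
# here a sample hash stands in for it.
sample = { 'RestartCount' => 5, 'State' => { 'Running' => false } }
gave_up?(sample)  # => true
```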