[FFmpeg-user] Error with FFmpeg and Node Canvas for SubtitleO

Suraj Kadam suraj at subtitleo.com
Sun Sep 3 08:21:36 EEST 2023


We have a subtitles automation tool, built with ReactJS on the frontend and
NestJS on the backend.

The flow is that the frontend first shows the user a preview of the
subtitles and lets them modify the styling and the position of the
subtitles (or of the subtitle container); in the preview everything is done
with CSS and JS.

We are looking for a way to sync the values from the frontend so they are
applied accurately on the backend.

Our current approach is to generate PNG images using node-canvas and then
composite them onto the video at the relevant timestamps using the overlay
filter.

The first concern is scaling/dimensions: on the frontend we use VideoJS,
and for a better viewing experience we resize the player per device, so the
video's width and height on the frontend are (usually) smaller than the
actual dimensions of the video.

When we use those preview dimensions, the subtitles render too small on the
video, because the width and height of the canvas shrink accordingly.
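What we have in mind to close that gap is scaling every pixel value (font size, canvas dimensions) by the ratio of the real video width to the preview player width. A minimal sketch, assuming the frontend reports its player size alongside the styling values (all names here are illustrative, not our actual API):

```typescript
// Scale a pixel value measured in the preview player up to the
// real video's coordinate space. All names are hypothetical.
function scaleToVideo(
  previewValue: number,
  previewWidth: number,
  videoWidth: number,
): number {
  return previewValue * (videoWidth / previewWidth);
}

// Example: a 24px preview font on a 640px-wide player,
// rendered onto a 1920px-wide video.
const fontSize = scaleToVideo(24, 640, 1920); // 72
```

The same ratio would apply to the canvas width/height before drawing, so the PNG matches the real video's scale.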

As for the position, we use the react-draggable library to make the
subtitles draggable in the preview, but we still fail to reproduce that
position accurately on the backend.
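For the position, our idea is to map the drag offset through the same preview-to-video ratio before building the overlay coordinates. A sketch of what we mean, assuming the frontend sends the drag offset together with the preview player size (field names hypothetical):

```typescript
// Placement reported by the frontend, in preview-player pixels.
interface PreviewPlacement {
  x: number; // drag offset from the player's left edge
  y: number; // drag offset from the player's top edge
  previewWidth: number;
  previewHeight: number;
}

// Convert a preview drag position into overlay x/y values
// expressed in the real video's pixel space.
function toOverlayPosition(
  p: PreviewPlacement,
  videoWidth: number,
  videoHeight: number,
): { x: number; y: number } {
  return {
    x: Math.round(p.x * (videoWidth / p.previewWidth)),
    y: Math.round(p.y * (videoHeight / p.previewHeight)),
  };
}
```

The resulting x/y would then replace the hardcoded `(W-w)/2` / `H-h-10` expressions in the overlay filter. We are unsure whether this is the right approach, which is why we are asking.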

We are just looking for a proper roadmap or solution for these blockers.

Would appreciate your help on this.

Thanks.

Here's the current implementation of the code (excerpted from a NestJS
service; the relevant imports are):

import * as fs from 'fs';
import { createCanvas } from 'canvas';
import * as ffmpeg from 'fluent-ffmpeg';

wrapText(context, text, x, y, maxWidth, lineHeight) {
    const words = text.split(' ');
    let line = '';

    for (let n = 0; n < words.length; n++) {
      const testLine = line + words[n] + ' ';
      const metrics = context.measureText(testLine);
      const testWidth = metrics.width;
      if (testWidth > maxWidth && n > 0) {
        context.fillText(line, x, y);
        line = words[n] + ' ';
        y += lineHeight;
      } else {
        line = testLine;
      }
    }
    context.fillText(line, x, y);
  }
  generateSubtitlePNG(transcription: any, outputPath: string, w, h) {
    // NOTE: w and h are passed in but not used yet; the canvas size is hardcoded
    const canvas = createCanvas(720, 200);
    const ctx = canvas.getContext('2d');

    ctx.fillStyle = 'rgba(0, 0, 0, 0)';
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    const fontSize = 41;
    ctx.font = `bold ${fontSize}px Arial`;
    ctx.fillStyle = 'white';
    ctx.textAlign = 'center';

    const maxWidth = canvas.width; // no horizontal padding applied
    const lineHeight = fontSize * 1.4; // adjust as needed
    const x = canvas.width / 2;
    const y = (canvas.height - lineHeight) / 2 + fontSize / 2; // adjusted for font height

    this.wrapText(ctx, transcription.text, x, y, maxWidth, lineHeight);

    const buffer = canvas.toBuffer('image/png');
    fs.writeFileSync(outputPath, buffer);
  }
  async applySubtitlesNew(
    addSubtitlesDto: AddSubtitlestDto,
    userId: number,
    project: any,
  ) {
    if (userId !== project.user.id) {
      throw new UnauthorizedException();
    }

    const videoPath = `uploads/${project.originalVideoFile}`;

    const transcriptions = JSON.parse(addSubtitlesDto.transcriptions);
    const pngPaths = [];
    const dimensions = await this.getVideoDimensions(videoPath);
    console.log(dimensions);
    // Generate PNGs for each transcription
    for (let i = 0; i < transcriptions.length; i++) {
      const outputPath = `subtitles/${project.originalVideoFile}-subtitle-${i}.png`;
      this.generateSubtitlePNG(
        transcriptions[i],
        outputPath,
        dimensions?.width,
        dimensions?.height,
      );
      pngPaths.push(outputPath);
    }
    let filterComplex = '[0:v]';
    for (let i = 0; i < pngPaths.length; i++) {
      const start = transcriptions[i].start.toFixed(2);
      const end = transcriptions[i].end.toFixed(2);
      const overlayX = '(W-w)/2';
      const overlayY = `H-h-10`;
      filterComplex += `[${i + 1}:v]overlay=${overlayX}:${overlayY}:enable='between(t,${start},${end})'`;
      if (i < pngPaths.length - 1) {
        filterComplex += '[vout];[vout]';
      }
    }

    console.log(filterComplex);
    // Run FFmpeg
    return new Promise((resolve, reject) => {
      const ffmpegCommand = ffmpeg();

      // Add video input
      ffmpegCommand.input(videoPath);

      // Add PNG inputs
      for (const pngPath of pngPaths) {
        ffmpegCommand.input(pngPath);
      }

      ffmpegCommand
        .complexFilter(filterComplex)
        .outputOptions('-c:v', 'libx264')
        .output('output.mp4')
        .on('end', () => {
          // Handle completion
          // Clean up PNG files
          for (const pngPath of pngPaths) {
            fs.unlinkSync(pngPath);
          }
          resolve('done');
        })
        .on('error', (error, stdout, stderr) => {
          console.log(stderr);
          // Handle error
          reject(error);
        })
        .run();
    });
  }

