
Detailed Explanation of Advanced Settings

Justin62628 · About 30 min

The following sections introduce the software's advanced settings.

Basic Task Settings

Workflow Recovery

Automatic Configuration

Tips

  • If a task is interrupted unexpectedly (for example, a power outage terminates the task and the program exits), you can restore the last chunk position by clicking Automatic Progress Search.
  • You can also drag the project folder directly into the software, and it will automatically search for the progress corresponding to that project folder.

First click the task entry whose progress you want to restore, then click this button. Then click "Interpolate", and the software will pop up a window asking you to confirm the starting position for interpolation.

Start point and End point

You can select the time period that needs processing

Input format: hours:minutes:seconds

Warning

After specifying start and end points for interpolation, a manual termination or power outage may cause progress restoration to fail

Start Block Count and Start Input Frame Count

Used to manually restore interpolation progress when automatic progress search fails or when the starting position must be specified by hand.

  • Start Block Count = the number of the last chunk exported to the output folder + 1 (for example, if the last chunk in the figure is chunk-001, the Start Block Count is 1 + 1 = 2).
  • Start Input Frame Count = single output chunk size from the output quality (render) settings × (Start Block Count - 1).

As shown in the above figure, a video chunk has 1000 frames.
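As a minimal sketch of the two formulas above (the helper name is ours, not part of SVFI):

# Sketch of the two restore-position formulas; chunk_size must match the
# "Single Output Chunk Size" in your render settings (1000 in the figure).
def restore_position(last_chunk_number, chunk_size=1000):
    start_block_count = last_chunk_number + 1
    start_input_frame_count = chunk_size * (start_block_count - 1)
    return start_block_count, start_input_frame_count

# The example above: last exported chunk is chunk-001, with 1000-frame chunks
print(restore_position(1))  # -> (2, 1000)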

Restore to Origin

Set the start block and start input frame count to the system default value, and the software will automatically search for the restoration point and restore the task progress.

Risk Mode

When restoring task progress, enabling this item reduces the time the program needs to restore progress, but it may cause audio and video to fall out of sync.

Not recommended to enable.

Advanced software settings

Transition recognition

Enable transition recognition

Identify scene switches

To avoid jelly (warping) artifacts at scene cuts during interpolation, it is recommended to enable transition recognition.

After checking Enable Transition Recognition, the parameter below defaults to 12. If the exported video looks choppy, consider raising it to 15; if there is a significant amount of jelly, consider lowering it to 9. The useful range is typically 9 to 15.

As shown in the picture: jelly caused by a missed transition detection

Warning

This transition recognition mechanism is designed for long video input. For short inputs (2-3 seconds), it is recommended to disable it, or to generate transition data with third-party software for manual transition processing, to avoid jitter caused by poor automatic recognition.

Maximum recognition threshold (the default usually needs no adjustment)

When Use a fixed transition recognition is disabled (the default), the recommended value for this option is 80-90.

When Use a fixed transition recognition is enabled, the recommended value for this option is 40-60.

Use a fixed transition recognition

Uses a fixed threshold (the maximum recognition threshold) to identify transitions (unstable). The software then runs a similarity check on every pair of consecutive frames.

If the similarity score exceeds the threshold, the frame is treated as a transition. This mode is prone to false positives and missed detections; it is only recommended when the default detection method misses many transitions, such as in mashups with many shot cuts.

Manual transition selection support

Displayed when the "Transition Recognition" button is turned off.

The Json path field takes the path of the transition list file exported for the video by Transition Chooser; see the Usage Tutorial.

This imports transitions manually marked with the TC software to replace the automatically recognized ones, giving full control over where frames are and are not interpolated in the input video.

Other transition detection settings

Output transition frame

Output the transition frame in the video.

Transition frames, together with the relevant decision information, are written in PNG format to the scene folder inside the project folder. The project folder is retained.

Transition uses frame blending

The traditional method copies the previous frame as the transition frame. This method instead blends the previous and next frames (a gradual crossfade) to reduce the jitter caused by copying.

Transition interpolation

The traditional method copies the previous frame as the transition frame. This method instead takes the two frames before the transition and runs the interpolation algorithm on them, reducing the jitter caused by copying.

Warning

It is not recommended to enable this option with the RIFE algorithm; otherwise jelly will be introduced.

Output Resolution Settings

Output File Resolution

The dropdown box is used for resolution preset selection.

When the preset is Custom (user-defined), you can set the final output resolution of the video. SVFI resizes the picture first and then interpolates.

Crop Black Bars

Can be used to crop the black bars in the video; the bar width and height must be specified manually.

Example: the video's nominal resolution is 3840x2160 and the actual picture is 3840x1620, so the height here is 270 = (nominal height - actual height) ÷ 2.

Note: if AI super-resolution is used, the video here refers to the final output video.

Example: input video 1920x1080 with an actual picture of 1920x800; 2x super-resolution outputs 3840x1600. Enter 280 for the black bar height, and the output resolution can be customized to 3840x1600.

Tips

If both width and height are entered as -1, SVFI will automatically identify the black bars of the input video and crop them.
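A minimal sketch of the arithmetic in the two examples above (the function name is ours, not an SVFI API):

# Height to enter in "Crop Black Bars". With AI super-resolution, the bars
# are measured on the final output, so input measurements are scaled first.
def bar_height(nominal_height, picture_height, sr_scale=1):
    return (nominal_height - picture_height) * sr_scale // 2

print(bar_height(2160, 1620))              # 3840x2160 vs 3840x1620 -> 270
print(bar_height(1080, 800, sr_scale=2))   # 1920x1080 @ 2x SR -> 280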

Fill in Black Bars After Processing

Crop the black bars, run the processing (interpolation or super-resolution), then automatically add the black bars back.

Tips

This can reduce the amount of computing per frame to some extent and speed up the processing.

Use AI Super-Resolution to Make Video Images Clearer

Tips

This feature requires the purchase of the Professional DLC.

Warning

Performing frame filling and super-resolution simultaneously will consume more video memory, and insufficient video memory may cause the task to fail.

If video memory is under 10 GB, it is recommended to use Encode to finish the super-resolution first, and then interpolate.

Perform Super-Resolution After Frame Filling

Interpolate first, then super-resolve (usually slower, but uses less video memory and often gives better results).

Load the Graphics Card

Specify which graphics card to use for super-resolution.

Super-Resolution Algorithm

Currently, SVFI supports the following super-resolution algorithms.

| Algorithm Name | Applicable Materials | Requires BETA | Available on AMD GPUs |
| --- | --- | --- | --- |
| Anime4K | Anime | | ✓ |
| AnimeSR | Anime | | × |
| realCUGAN | Anime | | × |
| ncnnCugan | Anime | | ✓ |
| waifuCuda | Anime | | × |
| PureBasicVSR | Live Action | ✓ | × |
| BasicVSR++ T3 | Live Action | ✓ | × |
| ATD | Live Action | | × |
| realESR | General | | × |
| ncnnRealESR | General | | ✓ |
| waifu2x | General | | ✓ |
| TensorRT (ONNX) | General | | × |
| Compact | General | ✓ | × |
| SPAN | General | ✓ | × |

Tips

SVFI defines the distinction between anime materials and live-action materials as follows:

Anime materials are moving images composed mainly of flat layers, with clear boundaries between each layer and the others, for example hand-drawn 2D animation and most 3D-rendered "2D-style" pictures.

Live-action materials are real-world or computer-generated pictures captured from a single viewpoint, where individual layers and their boundaries cannot be distinguished by eye, for example live-action films, 3D CG, and 3D game footage.

In particular, we consider animation made with 3D/CG backgrounds plus 2D characters to be anime material.

Introduction to the Super-Resolution Model

realCUGAN

Anime-only; the results are excellent

  • up2x represents a 2x magnification, and 3x, 4x, etc. are similar.
  • The pro model is an enhanced version; see the official introduction for details.
  • Models with the word "conservative" are conservative models.
  • Models with "no-denoise" do not perform noise reduction.
  • Models with "denoise" perform noise reduction, and the number behind represents the noise reduction intensity.

ncnnCUGAN

The ncnn version of CUGAN (works on AMD, NVIDIA, and Intel GPUs); the model notes above apply here too.

waifuCuda

Used for anime super-resolution; speed and results are roughly similar to cugan.

realESR

Applicable to both live-action and anime, but better suited to anime

  • The RealESRGAN model tends to invent detail, making the picture clearer and more vivid.
  • The RealESRNet model tends to smudge, but preserves the original colors.
  • Models marked "anime" are dedicated to anime super-resolution and slightly faster than the previous two.
  • anime is the official model; anime_110k is a self-trained model.
  • RealESR_RFDN is a self-trained super-resolution model; it is fast and suited to anime input.

ncnnRealESR

The ncnn version of realESR; works on AMD, Intel, and NVIDIA GPUs.

  • realesr-animevideov3 (a relatively conservative anime video super-resolution model, with fast speed and high quality)
  • realesrgan-4xplus (4x magnification model)
  • realesrgan-4xplus-anime (4x anime magnification model)

AnimeSR

An anime super-resolution algorithm developed by Tencent ARC Lab

Only one 4x magnification model is provided (AnimeSR_v2_x4.pth); the result is more conservative than cugan's.

BasicVSRPlusPlusRestore

A live-action super-resolution algorithm whose quality depends on the super-resolution sequence length.

Tips

This algorithm is only available in the public beta.

Warning

This series of algorithms consumes a lot of video memory; a graphics card with more than 6 GB is recommended.

  • basicvsrpp_ntire_t3_decompress_max_4x 4x magnification decompression model t3 (recommended)
  • basicvsrpp_ntire_t3_decompress_max_4x_trt 4x magnification decompression model t3 (TensorRT acceleration) (difficult to compile, not recommended)

Anime4K

A super-fast, real-time anime super-resolution algorithm; relatively conservative

There are 6 preset scripts in total.

  • Anime4K_Upscale_x2 A/B/C/D are all 2x magnifications (default is A).
  • Anime4K_Upscale_x3 is 3x magnification, and the x4 model is similar.

Custom Anime4K models

  • In the installation folder models\sr\Anime4K\models, you can see the .json model configuration file.
  • Take Anime4K_Upscale_x2_A.json as an example.
{
  "shaders": [
    {
      "path": "Restore/Anime4K_Clamp_Highlights.glsl", "args": []
    },
    {
      "path": "Restore/Anime4K_Restore_CNN_VL.glsl", "args": []
    },
    {
      "path": "Upscale/Anime4K_Upscale_CNN_x2_VL.glsl", "args": ["upscale"]
    }
  ]
}
  • Among them, Anime4K_Clamp_Highlights.glsl and Anime4K_Restore_CNN_VL.glsl are 1x restoration algorithms, corresponding to models\sr\Anime4K\Restore\Anime4K_Clamp_Highlights.glsl. The args parameter of this model needs to be left empty.

  • Anime4K_Upscale_CNN_x2_VL.glsl is a 2x magnification algorithm, corresponding to models\sr\Anime4K\Upscale\Anime4K_Upscale_CNN_x2_VL.glsl. The args parameter of this model needs to be filled in with upscale.

  • Similar to the Anime4K_AutoDownscalePre_x2.glsl model, the args parameter needs to be filled in with downscale.

  • The list order is the actual filter call order; browse the model folder and freely combine, edit, or create new .json files, which take effect directly.

waifu2x

A classic conservative super-resolution algorithm

  • The cunet model is used for anime super-resolution.
  • The photo model is used for live-action footage.
  • anime is used for anime super-resolution.

Compact

Tips

This algorithm is only available in the public beta of the Professional DLC; you need to opt in manually under Steam settings - beta.

A super-resolution model structure, some models such as AnimeJanai are trained based on this structure.

AnimeJanai

Applicable to both live-action and anime, but better suited to anime

  • A lighter alternative to RealCUGAN, with weaker depth-of-field awareness (it easily over-sharpens the background), lower compute cost, and faster speed.
  • Speed: SuperUltraCompact > UltraCompact > Compact.

SPAN

Tips

This algorithm is only available in the public beta of the Professional DLC; you need to opt in manually under Steam settings - beta.

A super-resolution model structure, some model series such as Nomos are trained based on this structure.

TensorRT

NVIDIA-only acceleration for some of the super-resolution algorithms above

  • All models of cugan can be accelerated.
  • real-animevideov3 is a model specifically prepared for anime video super-resolution in RealESR.
  • RealESRGANv2-animevideo-xsx2 2x anime video super-resolution magnification model.
  • RealESRGANv2-animevideo-xsx4 4x anime video super-resolution magnification model.

Warning

Since TRT requires pre-compilation, do not enable more than 1 thread the first time you run a TRT model.

If an error occurs on first use, try again five or six times.

If the error persists, please contact the developer.

In theory the results match the non-TRT version, though individual scenes may differ.

Visual Comparison Demonstration of Super-Resolution Models

Add Super-Resolution Models on OpenModelDB by Yourself

SVFI supports manually adding super-resolution model weights that meet its requirements.

OpenModelDB hosts the model structures shown in the following figure.

Among them, the ones compatible with SVFI are Compact, SPAN, ATD, and ONNX (TensorRT).

Example: Adding a Compact or SPAN Model

  • Search for Aniscale, and you can see the model to be tested, AniScale-2-Compact
  • Click into the first-generation Aniscale.
  • Note the model information Size on the right: 64nf is the number of features ("model channel count") and 16nc is the number of convolutions ("model depth").
  • The strategy SVFI uses to load Compact models is as follows (see the sketch after this list):

    • If the model name contains super ultra (from animejanai), nf=24, nc=8;
    • If the model name contains ultra (from animejanai), nf=64, nc=8;
    • Default nf=64, nc=16.
  • Looking back at AniScale-2-Compact, the page gives no model structure information, so it is assumed to use the default configuration, nf=64, nc=16.

  • Download the pth model directly into SVFI\models\sr\Compact\models and it is ready to use. If the folder does not exist, create it manually.

  • The same is true for importing the SPAN model.

For SPAN, SVFI can currently only load models with nf=48; other configurations and otherwise modified models are not supported for now.
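A minimal sketch of the Compact loading strategy above (the function is ours; the "super ultra"/"ultra" checks assume those words appear in the file name, as in the AnimeJanai models, and the example file names are hypothetical):

# Guess (nf, nc) for a Compact model from its file name, per the rules above.
def compact_structure(model_name):
    name = model_name.lower().replace(" ", "").replace("_", "")
    if "superultra" in name:      # from animejanai
        return 24, 8
    if "ultra" in name:           # from animejanai
        return 64, 8
    return 64, 16                 # default structure

print(compact_structure("2x_AnimeJanai_SuperUltraCompact.pth"))  # (24, 8)
print(compact_structure("AniScale-2-Compact.pth"))               # (64, 16)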

Example: Adding TensorRT Model

You can also add other supported super-resolution models such as AnimeJanai.

SVFI's requirements for a super-resolution model's ONNX file are as follows (a small validation sketch follows the list):

  • There is only one input, and the dimension is [dynamic, 3, dynamic, dynamic].
  • There is only one output, and the dimension is [dynamic, 3, dynamic, dynamic].
  • The input node name is input, and the output node name is output.

Put it in SVFI\models\sr\TensorRT\models.
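As a minimal validation sketch of the three rules above, assuming the official onnx Python package (the helper name and example file name are ours):

import onnx

def meets_svfi_onnx_rules(path):
    model = onnx.load(path)
    graph = model.graph
    # Exactly one input named "input" and one output named "output"
    if len(graph.input) != 1 or len(graph.output) != 1:
        return False
    if graph.input[0].name != "input" or graph.output[0].name != "output":
        return False

    def shape_ok(value_info):
        dims = value_info.type.tensor_type.shape.dim
        # [dynamic, 3, dynamic, dynamic]: dim 1 fixed at 3, the rest symbolic
        if len(dims) != 4 or dims[1].dim_value != 3:
            return False
        return all(not d.HasField("dim_value") for i, d in enumerate(dims) if i != 1)

    return shape_ok(graph.input[0]) and shape_ok(graph.output[0])

print(meets_svfi_onnx_rules(r"SVFI\models\sr\TensorRT\models\my_model.onnx"))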

Model Compilation Instructions

  • After the model is compiled, a .engine file is generated. For example, realesrgan_2x.onnx.540x960_workspace128_fp16_io32_device0_8601.engine indicates that the model's input (tile) size is 540x960 (see the parsing sketch below).
  • Different tile sizes lead to very different super-resolution speeds, so choose the tile size carefully, and avoid enabling tiling when possible.
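As a hedged illustration, the tile size can be read back out of an engine file name like the one above; the naming scheme beyond the HxW token is inferred from that single example:

import re

def engine_input_size(filename):
    # Look for the "<height>x<width>_" token, e.g. "540x960_"
    m = re.search(r"\.(\d+)x(\d+)_", filename)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(engine_input_size(
    "realesrgan_2x.onnx.540x960_workspace128_fp16_io32_device0_8601.engine"
))  # -> (540, 960)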

Other Model Rules

  • By default, esrgan only supports models with nf=64, nb=23.
  • When the model name contains anime, nb is recognized as 6.

Terminology Explanation

  • nf => number of features,
  • nc => number of convs,
  • nb => number of blocks

Introduction to Some Special Models Placed in the Super-Resolution Category

InPaint Watermark Removal Model

Tips

This algorithm is only available in the beta of the Professional DLC; you need to opt in manually under Steam settings - beta.

  • inpaint_sttn_1x: currently this model only performs 1x restoration and has no upscaling function. It must be used together with the mask function:

The activation process is as follows:

  1. Enable the super-resolution function and select the correct model
  2. Enable the player function
  3. Enable the mask function
  4. Draw the mask and save it

The model automatically identifies and removes static watermarks inside each mask area. Make sure the mask area contains enough dynamic content; otherwise the watermark cannot be identified automatically.

Warning

This model performs poorly at identifying and removing watermarks over solid backgrounds or static content.

  5. Click Encode to start removing watermarks

Introduction to Other Super-Resolution Options

Super-Resolution Model Magnification

The super-resolution magnification of the currently selected model

Transfer Resolution Ratio

This is the pre-scaling function: the original video is first scaled by the user-set percentage, and super-resolution runs on the result.

Example: original video 1920x1080, transfer resolution ratio 50%, model magnification 4x.

The pipeline is then: 1920x1080 (input) -> 960x540 (scaled to 50%) -> 3840x2160 (super-resolved)

Tips

  • For restoration models, the transfer resolution will be forced to be set to 100%.

Warning

  • SVFI performs only one super-resolution or restoration pass per frame. If the output resolution is set to 400% but a 2x model is used, SVFI super-resolves once to 200% and then stretches to 400% with bicubic (see the sketch after this list).
  • Therefore, at 100% transfer resolution, a 2x model at 400% output and a 4x model at 200% output give different results.
  • When the output magnification does not match the model magnification, enabling tiling may garble the output video.
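A minimal sketch of this single-pass rule (names are ours; SVFI's internals may differ):

def sr_plan(width, height, transfer_ratio, model_scale, output_scale):
    # 1. Pre-scale by the transfer resolution ratio
    w, h = round(width * transfer_ratio), round(height * transfer_ratio)
    # 2. Exactly one pass of the super-resolution model
    sr = (w * model_scale, h * model_scale)
    # 3. Any remaining factor is plain bicubic stretching
    out = (round(width * output_scale), round(height * output_scale))
    return (w, h), sr, out

# 400% output with a 2x model: one SR pass to 200%, then bicubic to 400%
print(sr_plan(1920, 1080, 1.0, 2, 4.0))
# -> ((1920, 1080), (3840, 2160), (7680, 4320))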

Tiling block mode

Only for certain models: the more tiles, the less video memory used, and the slower the processing

  • No Tile: do not tile

  • 1/2 on Width: split in half horizontally

  • 1/2 on both W and H: split in half horizontally and vertically

  • 1/3 on w & h: split into thirds horizontally and vertically

  • 1/4 on w & h: split into quarters horizontally and vertically

RealCUGAN Low Video Memory Mode

Dedicated to realCUGAN; use it when the graphics card's video memory is insufficient

  • None: do not use the low video memory mode

  • Low VRAM Mode: enable the low video memory mode, which may affect picture quality

Tiling Size

  • Presets are provided for different video memory sizes, and you can also customize the value

Suggested operation when encountering video memory shortage

  • Graphics cards with less than 6 GB of video memory: if memory runs out, enable the tiling size directly and keep other options at their defaults.
  • More than 6 GB: prefer the tiling block mode and leave the tiling size off. If memory still runs out at the maximum mode (1/4), turn the mode off, enable the tiling size directly, and try values from large (512) downward.
  • 4 GB or less: enable the low video memory mode and the tiling size directly.

Warning

It is not recommended to enable when using realCUGAN

Super-Resolution Strength

Only used for the RealCUGAN super-resolution model series

For non-TensorRT models: the smaller the value, the clearer and sharper the image; the larger the value, the more conservative and stable (recommended range 0.5-1.2).

For TensorRT models the effect is reversed: the smaller the value, the blurrier the image, and the upper limit is 1.

Super-Resolution Threads

When you have multiple graphics cards, or GPU utilization is not saturated, try increasing this value (by 1 at a time)

Super-Resolution Sequence Length

Only valid for algorithms that require multi-frame input, such as the BasicVSR series and InPaint

  • The larger the super-resolution sequence length, the more frames are input in a single super-resolution, and the texture is more stable, but at the same time, the video memory usage will increase.
  • It is recommended to keep this value above 10. If the video memory is insufficient, it is recommended to reduce the picture resolution and ensure that the value is above 5.
  • For the watermark removal (InPaint) model, this value is generally recommended to be above 30 to obtain a better watermark removal effect.

Super-Resolution Using Half-Precision

  • Recommended: it greatly reduces video memory usage with little impact on picture quality.

Caution

On NVIDIA 10xx-series (Pascal) graphics cards, enabling this option slows down super-resolution and may produce black output; it is recommended to turn it off.

TTA

Only supported by ncnnCUGAN; trades a large amount of extra time for a small improvement in image quality

Using AI Enhancement Algorithms

Tips

This feature is only available in the public beta

FMNet - SDR2HDR10: use an AI algorithm to convert SDR video to HDR10
DeepDeband: use an AI algorithm to remove color banding (this may give the picture a pink tint)

Output Settings (Encode Parameter Quality)

Rendering Quality CRF

Adjusts the quality loss when the video is exported; it is positively correlated with the output bitrate.

Different codecs and compression presets affect how a given CRF behaves.

A CRF of 16 is generally visually lossless.

H.265 encoding reduces the bitrate significantly; judge whether the CRF value is reasonable by the visual quality of the picture.

For archival copies, the CRF can be set to 12.

The smaller the CRF value, the less quality is lost and the larger the exported file (bitrate).

Note: For the same value, the output quality of different codecs is different

Tips

When adjusting the output bitrate, if you are not familiar with CRF, use the default value 16 or read up on the topic first (for example via Baidu).

Target Bitrate

An alternative to render quality CRF; it works essentially like the bitrate settings in Premiere Pro, After Effects, and DaVinci Resolve

Codec

  • AUTO
    Automatically chooses the codec based on the slider below in the software
  • CPU
    Software encoding: the highest quality, but also the highest CPU usage. CPU performance determines whether interpolation or super-resolution gets blocked (lowering GPU utilization) and how long the final run takes.
  • NVENC
    Only for NVIDIA graphics cards that support NVENC. If your card does not support NVENC, do not select this option.
    See NVIDIA NVENC Gen.pdf in the installation directory to check whether your card supports NVENC.
  • VCE
    Only for AMD graphics cards that support VCE. If your card does not support VCE, do not select this option.
  • QSV
    Only for users with Intel integrated graphics (such as Intel UHD 630 or Iris Pro 580). Other users should not select this option.

Tips

The following codecs require the purchase of the Professional DLC

  • NVENCC is an optimized version of NVENC, with faster processing and better output quality.
  • QSVENCC is an optimized version of QSV, completing tasks more efficiently.
  • VCENCC is an optimized version of VCE, completing tasks more efficiently.

Manually specify the GPU used by the hardware encoder

In the custom compression command line option of the advanced settings:

  • When using the encc encoder, fill in -d||<gpu> to control the used encoding GPU, such as -d||0
  • When using the ffmpeg nvenc encoder, fill in -gpu||<gpu> to control the used encoding GPU
  • When using the ffmpeg vce, qsv encoder, fill in -init_hw_device||qsv=intel,child_device=<gpu> to control the used encoding GPU

Subjective comparison:

| Codec | Hardware Used | Speed | Quality | File Size | Selection Suggestions |
| --- | --- | --- | --- | --- | --- |
| CPU | CPU | Medium | High | Medium | Users who prioritize image quality and encoding stability; AMD GPU and AMD CPU users |
| NVENC | NVIDIA GPU | Fast | Medium | Large | Users who want both speed and quality and are not sensitive to file size |
| QSV | Intel integrated graphics | Fast | Medium | Large | Users who want both speed and quality and are not sensitive to file size |

Select the compression codec

For the selection of this function, you need to have certain video compression knowledge.

If you are not familiar with compression, please keep the following rules in mind:

  • HDR output must use H.265 10bit encoding
  • For resolutions above 2K, H.265 must be selected, especially 4K and 8K
  • If both H.264 and H.265 cause problems, use ProRes. Its output is the closest to visually lossless, at an extremely high bitrate; it is an intermediate codec used for editing work.
  • H.265 fast encoding or ProRes encoding is recommended
  • If a Broken Pipe error occurs, switch directly to H.265. Note that every codec has maximum resolution and frame rate limits
  • Do not deliberately chase very high resolution and frame rate at the same time, such as 8K 120fps

Tips

  • CPU encoding is software encoding: generally slow, with small files and good quality.
  • NVENC, QSV, and VCE are hardware encoders: NVENC uses NVIDIA graphics cards, QSV uses Intel integrated graphics, and VCE uses AMD graphics cards. Hardware encoding is fast and produces large files; at low bitrates and small sizes, quality is worse than CPU encoding.
  • For hardware encoding, prefer NVENC. In the NVIDIA GPU hard encoding preset (hover the mouse to view the description), you can look up your card's preset level on the driver's official website; 20- and 30-series cards are generally 7th+.
  • Hardware encoding puts some load on the graphics card. If a Broken Pipe error occurs with NVENC, lower the NVIDIA GPU hard encoding preset or switch to the integrated-graphics encoder QSV.
  • If the error persists, use the CPU.

Other general suggestions

  • If the output is only for personal viewing and compression quality demands are low, prefer hardware encoding (NVENC, VCE, QSV, etc.) to avoid a CPU compression bottleneck. A CPU bottleneck lowers GPU utilization, which in turn slows down the task

Select the compression preset

  • CPU: the preset names mean what they say in English; the faster the preset, the lower the quality, and vice versa.

  • NVENC (dedicated to NVIDIA GPUs): select p7 without hesitation

  • QSV (dedicated to Intel graphics): select slow directly

  • VCE (dedicated to AMD GPUs): select quality directly

  • NVENCC (dedicated to NVIDIA GPUs): select quality directly

  • QSVENCC (dedicated to Intel graphics): select best directly

  • VCENCC (dedicated to AMD GPUs): select slow directly

Use zero-delay decoding and encoding

Only valid when H264 or H265 is selected as the compression codec.

This feature reduces video decoding pressure and suits scenarios that need fast, low-latency decoding, such as:

  • Uploading videos to platforms such as BiliBili and YouTube, to avoid stutter after platform transcoding
  • Playing ultra-high-definition, ultra-high-frame-rate content on VR headsets
  • Cases where the player's decoded picture is distorted

Warning

This feature does not work when the input is HDR

NVIDIA GPU hard encoding preset

When the NVENC encoder is chosen, this preset can reduce the exported file size without changing picture quality. Look up which generation of NVENC chip your NVIDIA GPU has; if it is beyond the 7th, select 7th+.

Default compression scheme

Uses the traditional compression scheme; compatibility is strong, but the exported file size may increase.

Tips

Enabling this feature can solve most broken pipe problems.

Secondary compression audio quality

  • Re-encodes the audio; generally used for videos uploaded to platforms
  • Compresses all audio tracks in the video to 640kbps AAC format.

HDR Strict Mode

Process HDR content with strict presets, enabled by default

Compatible with DV HDR10

Enable HDR10 compatibility when outputting Dolby Vision, enabled by default

One-click HDR: Convert SDR video to HDR10+

Four one-click HDR modes are provided; try them yourself to see which works best

Decoding Quality Control

Use vspipe for pre-decoding

Tips

This feature requires the purchase of the Professional DLC.

Uses vspipe for pre-decoding. This function is a prerequisite for many specific features (such as deblocking, quick noise addition, and QTGMC deinterlacing).

If it cannot decode your input or the task reports an error, turn this option off.

Tips

You can modify the vspipe.py template file in the software installation directory to add custom filters such as dpir, or adjust the order of upscaling and frame interpolation in the VSPipe flow based on it.

Full VSPipe Workflow

Tips

This feature requires the purchase of the Professional DLC.

The entire pipeline runs through vspipe to cut unnecessary computation. It is the fastest mode current SVFI offers under identical settings.

Only TensorRT-accelerated super-resolution and some frame interpolation models are supported.

If this is enabled for interpolation, the spatio-temporal linearization smoothness optimization is forcibly enabled.

Hardware Decoding

Reduces decoding pressure for high-resolution videos, but may slightly reduce picture quality and can make the interpolation module run out of video memory when memory is tight.

Fast Frame Splitting

Fast frame splitting reduces decoding pressure but may cause color deviation in the picture.

High-Precision Optimization Workflow

Tips

This feature requires the purchase of the Professional DLC.

  • If you have spare CPU capacity, enabling this is recommended: it solves most color deviation problems, including, to the greatest extent possible, the color cast introduced by HDR video compression. It increases CPU load and may even slow down interpolation.
  • Enabling this for super-resolution disables half precision (and so needs more video memory). Choose according to your needs.

Tips

It is recommended to enable this option when the input is an HDR video.

Enable Deinterlacing

Tips

This feature requires the purchase of the Professional DLC.

  • Use ffmpeg to perform deinterlacing processing on the input interlaced video.

  • When using vspipe for pre-decoding, use QTGMC deinterlacing to process the picture.

Fast Noise Reduction

Tips

This feature requires the purchase of the Professional DLC.

The "Fast" option in this column, if there is no special need, please keep it closed, otherwise it will slow down the task processing speed.

Tips

Test this option with controlled comparisons to see whether it helps picture quality.

Not compatible with the high-precision optimization workflow

Quick Noise Addition

Add noise to the video, often used when super-resolving the video.

Custom Frame Splitting Parameters (Professional Option)

Replaces the parameters ffmpeg or vspipe uses for decoding; custom parameters are separated by ||.

Custom Encoding Settings

Specify the number of encoding threads

When the encoder is CPU, this can help limit the CPU usage rate and thus control rendering speed.

Custom Compression Parameters

This feature is a professional option (note that the number of input items must be even).

Keys and values are separated by ||

Example: custom compression parameters for CPU H.265 compression:

-x265-params||ref=4:me=3:subme=4:rd=4:merange=38:rdoq-level=2:rc-lookahead=40:scenecut=40:strong-intra-smoothing=0
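As a sketch of how the separator rule reads (our interpretation, not SVFI source code), the string expands to an argument list like this:

def split_custom_params(s):
    # "||" separates keys and values; the item count must be even
    return [token for token in s.split("||") if token]

args = split_custom_params(
    "-x265-params||ref=4:me=3:subme=4:rd=4:merange=38:rdoq-level=2:"
    "rc-lookahead=40:scenecut=40:strong-intra-smoothing=0"
)
assert len(args) % 2 == 0   # the even-count rule above
print(args)  # ['-x265-params', 'ref=4:me=3:...']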

Time Remapping: Change the Speed of the Video

Tips

This feature requires the purchase of the Professional DLC.

  • This feature is used to create "slow motion" material.

  • For example, with the output frame rate set to 120 and time remapping set to 60, the output plays at 50% of the original speed (restated in the sketch below).

  • In other cases you can likewise set the output frame rate yourself; decimals are supported.
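The relationship in the example can be restated as a one-line formula (our restatement):

def playback_speed(remap_fps, output_fps):
    # Fraction of real-time speed after remapping
    return remap_fps / output_fps

print(playback_speed(60, 120))  # 0.5, i.e. 50% slow motion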

Warning

For anime material, try to enable Forward De-Weighting under Video Smoothness Optimization in the Frame Interpolation Settings.

Alternatively, use software such as Premiere to lower the original video's frame rate and remove the duplicate frames, avoiding jitter after remapping.

The original video's frame rate is generally reduced to 8 or 12 fps

Start and End Looping

Treats the first frame as following the last frame, to suit looping videos that join end to end.

Tips

Normally the end of the output is short by (output frame rate / input frame rate) frames, because there are no further frames to interpolate toward; this is normal. Loop mode is unaffected, because the opening frames can always pair with the final frames to generate the remaining frames.

IO Control

Manually Specify Buffer Memory Size

If system RAM is tight (below 16 GB), it is recommended to set the buffer size manually to 2-3 GB to avoid out-of-memory errors.

Single Output Chunk Size

  • For interpolation and compression tasks, a small audio-less clip is output each time this many frames have been rendered, so you can preview the result.
  • The clips are written to the output folder you set and merged into one file when the interpolation or compression task completes.

Retain the project folder after the task is completed

Do not delete the project folder after the task is completed.

Frame Interpolation Settings

Safe Frame Rate

If the video will be uploaded to a media platform for online viewing, enable this option.

When the input is an NTSC-rate video (for example 24000/1001 fps), this option converts the output to the corresponding NTSC rate (such as 60000/1001) to avoid audio-video desynchronization. If it is not enabled, desync may occur (for example an output of 59994/1000).

It is recommended to keep this option enabled
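A hedged sketch of the conversion described above: snap a target rate to the NTSC family (x * 1000/1001) that matches the input (the helper name is ours):

from fractions import Fraction

def ntsc_safe(target_fps):
    # e.g. 60 -> 60000/1001 (~59.94), matching a 24000/1001 input
    return Fraction(target_fps * 1000, 1001)

print(ntsc_safe(60))         # 60000/1001
print(float(ntsc_safe(60)))  # 59.94005994...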

Warning

If this option is not enabled and the input has a non-standard frame rate (such as 119800/1000), the output mkv may become a variable-frame-rate video due to mkvmerge.

Try to process videos with standard input frame rates to avoid audio-video desynchronization

Half-Precision Mode

Reduces video memory usage, and accelerates processing on NVIDIA 20-, 30-, and 40-series (and newer) graphics cards

Warning

May cause a decrease in picture accuracy.
For example, interpolating with the gmfss model may give the output video a grainy look

Reverse Optical Flow

This feature can make the picture smoother to a certain extent.

Tips

If a cudnn status error occurs when using the GMFSS pg104 interpolation model, turn off reverse optical flow.

Enabling this feature may cause artifacts around moving objects with some models (such as GMFSS pg104). Enable or disable it based on your own repeated experiments; the same applies to other similar features.

Optical Flow Scale

This is the optical flow resolution scaling factor used by SVFI when performing optical flow calculation using the frame interpolation algorithm. 0.5 means that the input picture is scaled by half and then the optical flow calculation is performed to improve the performance or effect of certain algorithms.

  • When using the RIFE algorithm, when the original video size is 1080P, the default is 1.0; 4K and above is 0.5; less than 1080P is 1.0

  • When using the GMFSS algorithm, when the original video size is 1080P, the default is 1.0; 4K and above is 0.5; less than 1080P is 1.0

Warning

When using the GMFSS algorithm with sources at or below 1080P, values below 1.0 are not recommended for this option

Interlaced Frame Interpolation

  • Equivalent to a special form of tiling, used to reduce video memory usage. There is no screen tearing, but the picture becomes blurrier

  • Used appropriately, this lets a card with little video memory interpolate ultra-high resolutions (for example, 8K on a 4 GB card)

Video Smoothness Optimization

Warning

This group of options is only for anime input or live-action material containing duplicate frames.

It is generally not recommended for live-action footage.

| Method | Application Scenarios | Speed | Smoothness | Number of Jellies |
| --- | --- | --- | --- | --- |
| Spatio-Temporal Linearization | Universal | ★★☆ | ★☆☆ | ☆☆☆ |
| Fixed Threshold De-Weighting | Universal | ★★★ | ★☆☆ | ☆☆☆ |
| Remove One Frame per Two | Anime | ★★★ | ★★☆ | ★☆☆ |
| Remove One Frame per Two and One Frame per Three | Anime | ★★★ | ★★☆ | ☆☆☆ |
| First-Order Difference De-Weighting | Anime | ★★☆ | ★★☆ | ★★☆ |
| Spatio-Temporal Resampling | Anime | ★★☆ | ★★★ | ★★★ |
| Forward De-Weighting | Anime | ☆☆☆ | ★★★ | ☆☆☆ |
| Cross Reconstruction | Anime | ★☆☆ | ★★★ | ☆☆☆ |
| Smooth Difference | Anime, One-time Restoration | / | / | / |

Note: the fewer the jellies, the better the video quality; the more stars in that column, the more likely the method is to produce jellies.

Explanation:

  • Spatio-Temporal Linearization: Solves the jitter caused by asymmetric interpolation; has a smoothing, stabilizing effect on any video (also known as TruMotion)
  • Fixed Threshold De-Weighting: Alleviates the jitter caused by duplicate frames; for anime, typical values are 0.2, 0.5, 1.0 or higher
  • Remove One Frame per Two: Detects "one frame held for two" animation and converts it to one frame per frame
  • Remove One Frame per Two and One Frame per Three: Detects frames held for two or three and converts them to one frame per frame
  • First-Order Difference De-Weighting: Similar to the two options above, but de-weights more aggressively
  • Spatio-Temporal Resampling: If the input is around 24 fps, holds each frame for at most three, and contains no higher-frame-rate content, this can completely remove anime jitter
  • Cross Reconstruction: Similar to spatio-temporal resampling, with generally better results. The input frame rate must be around 24, the output frame rate must be an integer multiple of the input, and it only works with specific models
  • Forward De-Weighting: Completely removes anime jitter. If the input is around 24 fps, the default value 2 handles jitter from frames held three times or fewer
  • Smooth Difference: For videos with irregular duplicate frames (unknown hold lengths), or screen recordings with dropped frames, this applies a smoothing pass.

Warning

Smooth Difference does not change the input video's frame rate

For videos with long stretches of solid-color scenes, this option may introduce unnecessary duplicate frames and cause audio-video desynchronization

Tips

Forward De-Weighting, Cross Reconstruction, and Spatio-Temporal Resampling only support algorithms and models that can interpolate at arbitrary timesteps.
If you are not sure whether your video holds each frame for two or for three, see the One Frame per N Introduction.

If the output is still not smooth enough after de-weighting optimization, the transition detection may be wrong, and the transition sensitivity threshold needs to be raised

Warning

Because AI interpolation of anime is still limited at this stage, de-weighting increases the inter-frame motion range and can distort the picture during interpolation. Test each input video with controlled comparisons and pick the best de-weighting mode.

Choose the de-weighting mode carefully. If you are interpolating an entire anime, it is recommended to enable spatio-temporal linearization or to leave duplicate-frame removal off.

Frame interpolation effect after enabling video smoothness optimization (forward de-weighting)

Load the Graphics Card

Specify which graphics card to use for frame interpolation

Introduction to Frame Interpolation Algorithms

SVFI integrates several frame interpolation algorithms, such as RIFE, GMFSS, UMSS, etc.

These algorithms perform differently on different material; recommended algorithms and models for live-action and anime are given in Presets and the introduction below

Introduction to Frame Interpolation Models

Tips

Models with the ncnn prefix use ncnn as the inference framework and run on both NVIDIA and AMD GPUs; models without this prefix cannot be used on AMD GPUs or integrated graphics.

  • RIFE: A fast, popular modern frame interpolation algorithm (model introduction below)

2.3: Classic, popular model; fast, with good results.

4.6: More than twice as fast as 2.3, with better results; recommended.

4.8: Anime-optimized model; better results when interpolating anime, same speed as 4.6

4.9: Optimized for anime and live action; better results when interpolating live action, same speed

rpr_v7_2.3_ultra: Combined model, more adaptable to complex scenes.

rpr_v7_2.3_ultra#2: Combined model, more adaptable to complex scenes.

  • ncnn-rife: RIFE with broad graphics card support; good compatibility, fast, slightly lower quality than RIFE.

  • ncnn_dain: A traditional, older algorithm; works for both anime and live action, supports arbitrary timesteps, very slow, and very smooth.

  • GMFSS: Slow, with very high quality (model introduction below; models marked trt are acceleration models)

pg104: The fourth-generation gmfss anime model, currently the strongest anime interpolation model

union_v: The third-generation GMFSS model, with a stable structure and smooth pictures

basic: The first-generation gmfss model; very slow, and its results may be more stable than the models above

Other Frame Interpolation Options Introduction

TTA Mode

Tips

This feature requires the purchase of the Professional DLC.

Enabling this feature can reduce jellies, reduce subtitle jitter, and lessen object-disappearance problems, making the picture smoother and more comfortable.

It costs extra interpolation time, and some interpolation models do not support it.

The larger the number, the slower the processing and the fewer the jellies; usually 1 or 2 is enough.

A moderate value suits RIFE 2.3

Bidirectional Optical Flow

Speed drops by about half; the results of the RIFE 2.x series interpolation models may improve slightly

For gmfss/umss models, enabling bidirectional optical flow gives about a 5% speedup with unchanged results, but increases video memory usage

Dynamic Optical Flow Scale

Tips

This feature requires the purchase of the Professional DLC.

Dynamically selects the optical flow scale during interpolation, which can reduce object disappearance and jellies (only applies to RIFE 2.3 and RIFE 4.6)

Custom Preset Bar

Tips

This feature requires the purchase of the Professional DLC.

Create a New Preset Based on the Current Settings

Name the preset, then click to create it

Remove the Current Preset

Delete the currently selected preset

Apply the Specified Preset

Loads a previously saved preset and applies its parameters automatically

Toolbox

End Residual Processes

Ends all tasks, including those from multiple open SVFI instances.

Tips

If you need to avoid ending other open SVFI instances, manually end only the SVFI CLI processes under the current SVFI process in Task Manager. With multi-threading enabled, avoid clicking the End Task button casually.

Convert Video to GIF Animation

Generate high-quality GIF animations

Usage example:

Input video path: E:\VIDEO\video.mp4

Output animation (gif) path: E:\GIF\video_gif_output.gif

Output frame rate: 30 fps

Tips

The output frame rate should generally be at most the original video's frame rate, and values above 30 are not recommended

Merge Existing Chunks

Merge scattered chunk fragments.

Tips

If a task fails during the final merge, you can select the task and click this button to complete the merge after adjusting the settings.

Audio and Video Merging

  • Fill in the complete path of the video (example: D:\01\myvideo.mp4)

  • Fill in the path of the audio (example: D:\01\myvideo.aac), or use a video as the audio source (example: D:\01\otherVideo.mp4)

  • Output video path (example: D:\01\output.mp4)

  • Secondary compression audio: compresses the audio to 640kbps AAC format

Export the Current Settings to a Text File

Exports the settings as an ini file that can be shared with other users. To apply a shared preset, drag the file into the software; a prompt will confirm the preset was applied.
See Usage Tips for detailed usage.

Tips

If the software's video output does not meet expectations (color cast, poor quality, etc.), click this button and send the settings file to the developer to help locate the problem.

Debug Mode

Output debug information during the task.

Warning

In some cases, this mode adds debug content to the picture and slows down task processing.

So please turn off this option for formal (production) runs.

Left Title Bar Function

Settings

Main settings page

Preview

Output preview page

Tips

When previewing in the player interface with an HDR input, a gray preview picture is normal.

Status

View the program output information

User Page

View software achievements and available or owned DLCs

Preference Settings

Rest Interval

Lets the device rest every X hours (temporarily pauses the task)

Cache Folder

Redirects the task (cache) folder to another location; the final output video still goes to the output folder

After the Task Runs

Choose automatic actions to run after interpolation completes

Force Exit

Enabled by default; the software force-ends its own process when an error occurs, avoiding residual processes

Tips

When using SVFI's multi-instance or one_line_shot_args pipeline features, it is recommended to turn this option off, so instances are not force-exited when the software exits.

Enable Preview

Show the preview window during frame interpolation

Auto Error Correction

Automatically modify settings to prevent task errors

Tips

Turning off auto error correction can speed up task initialization; this is recommended when processing queued tasks with stable settings.
Note that if any encoding or interpolation option is set to "AUTO", auto error correction still runs even when this option is off.
To disable auto error correction completely, set every option to a non-AUTO value.

Custom output file name format

You can customize the output file name. The default value is {INPUT}-{RENDER}.{16BIT}.{DI}.{DN}.{FG}.{DB}.{DP}.{OCHDR}.{FN}.{FPS}.{VFI}.{DEDUP}.{SR}.{FP16}.{DEBUG}_{TASKID}{EXT}. The meanings of each abbreviation are as follows:

| Abbreviation | Meaning |
| --- | --- |
| INPUT | Input file name |
| RENDER | Enable only-compression mode |
| 16BIT | Whether high-precision mode is enabled |
| DI | Whether de-interlacing is enabled |
| DN | Whether noise reduction is enabled |
| FG | Whether fast noise addition is enabled |
| DB | Whether debanding is enabled |
| DP | Whether lens stabilization is enabled |
| OCHDR | Whether one-click HDR is enabled |
| FN | Whether FMNet HDR is enabled |
| FPS | Output frame rate |
| VFI | VFI model used |
| DEDUP | Duplicate-frame deduplication mode |
| SR | Super-resolution model used |
| FP16 | Whether half precision is enabled |
| DEBUG | Debug mode |
| TASKID | Task ID |
| EXT | Output file extension |

Clear the Task List After Completion

Clear the input queue after all tasks in the list are completed

Mute Mode

Do not pop up windows and notifications

Window on top

Keep the window on top to avoid possible Windows scheduling performance loss.

Background Image

You can select pictures to enable custom backgrounds

Background Blur

The larger the value, the more blurred the background

Background Transparency

The larger the value, the higher the background brightness

Apply Theme

Change the theme of the application

Theme Color

Change the theme color of the application

Language

Set the preferred language for the user interface

Add to whitelist

Click to add the installation folder to the Windows Defender whitelist. This has no effect on other anti-virus software.

Check installation file integrity

Click to have Steam verify file integrity at the next startup. This may fix cases where the software cannot run normally after an update or due to incorrect settings.

Use Only CPU

Runs AI inference on the CPU only. Only for devices without a graphics card.

Use All GPUs

Use all available GPUs to accelerate AI inference.

Warning

If the device has only one graphics card, please be sure to turn off this option.

TensorRT INT8 Quantization Function

Accelerates TensorRT model execution, but TensorRT compilation takes longer and model quality may decline; use this feature with caution.

By default the software quantizes the model for 750 rounds; the count can be adjusted in Advanced Related Settings. Processing takes a long time, and the speedup may be small on some devices.

Help

Learn about new features and useful tips of SVFI (shortcut operations, shortcut keys, etc.)

Provide Feedback

Provide feedback to help us improve SVFI

Privacy Protection Statement

Click to choose whether to send non-private diagnostic data to help us improve the software.

About

Software copyright and logs

Contributors: Justin62628, DAMNCRAB