The BlockMatching() class is written in Python. Its constructor takes the following arguments:
- dfd : {0:MAD, 1:MSE} Displaced frame difference
- blockSize : (sizeH,sizeW)
- searchMethod : {0:Exhaustive, 1:Three-Step}
- searchRange : (int) +/- pixelwise range
- motionIntensity : True (default)
Normalizes motion vector intensities: the motion vector with the largest amplitude is mapped to 255, and a threshold prevents intensity values from falling below 100.
- predict_from_prev : False (default)
- N : 5 (default)
If predictions are made from previously predicted frames (predict_from_prev=True), the anchor frame is updated every N frames.
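The two DFD (displaced frame difference) criteria selected by the dfd argument can be sketched as follows. This is a minimal illustration of MAD and MSE between an anchor block and a displaced candidate block, not the class's actual implementation:

```python
import numpy as np

def mad(block, candidate):
    # Mean Absolute Difference between the anchor block and the
    # displaced candidate block (dfd=0).
    return np.mean(np.abs(block.astype(float) - candidate.astype(float)))

def mse(block, candidate):
    # Mean Squared Error between the same two blocks (dfd=1).
    return np.mean((block.astype(float) - candidate.astype(float)) ** 2)
```

Both return a single matching cost; the search method then keeps the displacement with the smallest cost.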
Initialize an object of the BlockMatching() class with an arbitrary name; bm is suggested.
```python
bm = BlockMatching(dfd=dfd,
                   blockSize=blockSize,
                   searchMethod=searchMethod,
                   searchRange=searchRange,
                   motionIntensity=False)
```
Then use the step() method to run the algorithm. After execution, the generated motion field and the predicted anchor frame are available through the bm.motionField and bm.anchorP properties.
```python
bm.step(anchor, target)
motionField = bm.motionField
anchorP = bm.anchorP
```
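For intuition, exhaustive search (searchMethod=0) over a +/- search range can be sketched as below. This is an illustrative NumPy implementation of the general technique, not the code behind step():

```python
import numpy as np

def exhaustive_search(anchor, target, block_size=8, search_range=4):
    """Return a (rows, cols, 2) array of (dy, dx) motion vectors.

    For every block in the anchor frame, scan every displacement in
    [-search_range, search_range] and keep the one minimizing MAD.
    """
    H, W = anchor.shape
    rows, cols = H // block_size, W // block_size
    mv = np.zeros((rows, cols, 2), dtype=int)
    for r in range(rows):
        for c in range(cols):
            y, x = r * block_size, c * block_size
            block = anchor[y:y + block_size, x:x + block_size].astype(float)
            best = np.inf
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    ty, tx = y + dy, x + dx
                    if ty < 0 or tx < 0 or ty + block_size > H or tx + block_size > W:
                        continue  # candidate falls outside the target frame
                    cand = target[ty:ty + block_size, tx:tx + block_size].astype(float)
                    cost = np.mean(np.abs(block - cand))  # MAD criterion
                    if cost < best:
                        best = cost
                        mv[r, c] = (dy, dx)
    return mv
```

The three-step search (searchMethod=1) follows the same idea but evaluates only a coarse-to-fine subset of displacements instead of the full window.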
Use the visualize() method of the Video() class to put the 4 frames together into a collage. Here, the a and t arguments are the frame numbers of the anchor and target frames, respectively.
```python
collage = video.visualize(anchor, target, motionField, anchorP, text, a, t)
```
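The motionIntensity normalization described above (255 for the largest-amplitude vector, a floor of 100) can be sketched like this. The exact handling of edge cases is an assumption; this is not the class's internal code:

```python
import numpy as np

def intensity_map(motion_field):
    # motion_field: (rows, cols, 2) array of (dy, dx) vectors.
    # Scale amplitudes so the largest maps to 255, then apply the
    # floor of 100 described in the text. Treating zero-motion blocks
    # the same way is an assumption made for this sketch.
    amp = np.hypot(motion_field[..., 0], motion_field[..., 1])
    if amp.max() == 0:
        return np.zeros(amp.shape, dtype=np.uint8)
    scaled = amp / amp.max() * 255
    return np.maximum(scaled, 100).astype(np.uint8)
```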