Shape context based video texture synthesis from still images

  • Chao Yin*
  • Yan Gui
  • Zhifeng Xie
  • Lizhuang Ma

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

A video texture provides a continuously varying stream of images. In this paper, we introduce a new algorithm that allows the user to easily produce a visually plausible video texture from a small collection of still images. In order to approximate the original temporal order of the scene, we apply a shape context based order recovery algorithm to the still images. The frame sequence is subsequently generated using a second-order Markov chain model. The final video texture is generated by frame interpolation, which utilizes thin-plate spline (TPS) warping and inverse distance weighting interpolation techniques. In conclusion, we show that significant improvements are obtained by using the TPS warping technique instead of the traditional optical flow method.
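The inverse distance weighting step mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general IDW technique, not the authors' implementation; the function name, the power parameter, and the 2D point representation are assumptions. Each known sample contributes to a query location with a weight proportional to the inverse of its distance raised to a power:

```python
import math

def idw_interpolate(points, values, query, power=2.0, eps=1e-12):
    """Inverse distance weighting: estimate a scalar value (e.g. a pixel
    intensity) at `query` from known (point, value) samples.

    points : list of (x, y) sample coordinates
    values : scalar value at each sample point
    query  : (x, y) location to interpolate at
    power  : exponent controlling how quickly influence decays with distance
    """
    weighted = []
    for (x, y), v in zip(points, values):
        d = math.hypot(x - query[0], y - query[1])
        if d < eps:  # query coincides with a known sample: return it exactly
            return v
        weighted.append((1.0 / d ** power, v))
    total = sum(w for w, _ in weighted)
    return sum(w * v for w, v in weighted) / total
```

For example, a query point midway between two samples receives equal weights, so `idw_interpolate([(0, 0), (1, 0)], [0.0, 10.0], (0.5, 0.0))` yields the mean of the two sample values.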

Original language: English
Title of host publication: Proceedings - 2011 International Conference on Computational and Information Sciences, ICCIS 2011
Pages: 38-42
Number of pages: 5
DOIs
State: Published - 2011
Externally published: Yes
Event: 2011 International Conference on Computational and Information Sciences, ICCIS 2011 - Chengdu, Sichuan, China
Duration: 21 Oct 2011 - 23 Oct 2011

Publication series

Name: Proceedings - 2011 International Conference on Computational and Information Sciences, ICCIS 2011

Conference

Conference: 2011 International Conference on Computational and Information Sciences, ICCIS 2011
Country/Territory: China
City: Chengdu, Sichuan
Period: 21/10/11 - 23/10/11

