A blind deconvolution model for scene text detection and recognition in video

Vijeta Khare, Palaiahnakote Shivakumara, Paramesran Raveendran, Michael Blumenstein

Research output: Contribution to journal › Article › Research › peer-review

40 Citations (Scopus)

Abstract

Text detection and recognition in poor-quality video is a challenging problem due to unpredictable blur and distortion effects caused by camera and text movements, which degrade the overall performance of text detection and recognition methods. This paper presents a combined quality metric for estimating the degree of blur in a video/image. The proposed method then introduces a blind deconvolution model that enhances edge intensity by suppressing blurred pixels. The proposed deblurring model is compared with other state-of-the-art models to demonstrate its superiority. In addition, to validate the usefulness and effectiveness of the proposed model, we conducted text detection and recognition experiments on blurred images classified by the proposed model from standard video databases, namely ICDAR 2013, ICDAR 2015 and YVT, and then on standard natural scene image databases, namely ICDAR 2013, SVT and MSER. Text detection and recognition results on both blurred and deblurred videos/images illustrate that the proposed model improves performance significantly.
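The abstract describes a blind deconvolution model optimized by alternating minimization (see the Keywords below). The sketch that follows is only an illustration of that general technique, not the paper's actual formulation: it alternates gradient steps on the latent image `x` and the blur kernel `k` for a plain least-squares data term `||k * x - y||^2`, with circular FFT convolution and a simple non-negativity/normalization projection on the kernel. All function names, step sizes, and constraints here are assumptions.

```python
import numpy as np

def conv2_circ(img, ker):
    """Circular 2-D convolution via FFT; the kernel is zero-padded to the image size."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(ker, img.shape)))

def blind_deconv(y, ksize=5, iters=100):
    """Alternating minimization of ||k * x - y||^2 over the latent image x and
    the blur kernel k, using gradient steps with 1/L (inverse-Lipschitz) sizes.
    A toy stand-in for a blind deconvolution model, not the paper's method."""
    x = y.copy()                                  # initialize latent image to the blurred input
    k = np.zeros_like(y)
    k[:ksize, :ksize] = 1.0 / ksize ** 2          # uniform initial kernel on a small support
    for _ in range(iters):
        Kf = np.fft.fft2(k)
        r = conv2_circ(x, k) - y                  # residual k * x - y
        # x-step: the gradient w.r.t. x is the correlation of the residual with k
        gx = np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(Kf)))
        x -= gx / max((np.abs(Kf) ** 2).max(), 1e-8)
        Xf = np.fft.fft2(x)
        r = conv2_circ(x, k) - y
        # k-step: the gradient w.r.t. k is the correlation of the residual with x
        gk = np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(Xf)))
        k -= gk / max((np.abs(Xf) ** 2).max(), 1e-8)
        k[ksize:, :] = 0.0
        k[:, ksize:] = 0.0                        # restrict kernel to its small support
        k = np.clip(k, 0.0, None)                 # keep kernel non-negative
        if k.sum() > 0:
            k /= k.sum()                          # normalize kernel to sum to 1
    return x, k
```

In practice, published blind deconvolution methods add regularizers (e.g., sparse image gradients and a kernel prior) to each subproblem; the alternating structure above is the shared skeleton.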

Original language: English
Pages (from-to): 128-148
Number of pages: 21
Journal: Pattern Recognition
Volume: 54
DOIs
Publication status: Published - 1 Jun 2016
Externally published: Yes

Keywords

  • Alternative minimization
  • Blind deconvolution
  • Text detection
  • Text recognition
  • Text restoration
