DSCo: a language modeling approach for time series classification

Daoyuan Li, Li Li, Tegawendé F. Bissyandé, Jacques Klein, Yves Le Traon

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research

3 Citations (Scopus)

Abstract

Time series data are abundant in various domains and are often characterized as large in size and high in dimensionality, leading to storage and processing challenges. Symbolic representation of time series, which transforms numeric time series data into texts, is a promising technique to address these challenges. However, such techniques are essentially lossy compression functions, and information is partially lost during transformation. To mitigate this loss, we propose a novel approach named Domain Series Corpus (DSCo), which builds per-class language models from the symbolized texts. To classify unlabeled samples, we compute the fitness of each symbolized sample against all per-class models and choose the class represented by the model with the best fitness score. Our work takes advantage of mature techniques from both the time series mining and NLP communities. Through extensive experiments on an open dataset archive, we demonstrate that DSCo performs comparably to approaches that work with the original uncompressed numeric data.
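The pipeline the abstract describes (symbolize each numeric series into text, train one language model per class, then label a sample by whichever class model fits it best) can be sketched in a few dozen lines of Python. The sketch below is an illustration only, not the paper's implementation: it assumes a SAX-style symbolization with a four-letter alphabet and add-one-smoothed bigram models scored by log-likelihood, whereas DSCo defines its own segmentation and fitness functions. All function names and parameters here are hypothetical.

# Illustrative sketch of a DSCo-style pipeline (assumptions noted in comments).
from collections import defaultdict
import math

BREAKPOINTS = [-0.67, 0.0, 0.67]   # approximate quartiles of N(0, 1)
ALPHABET = "abcd"                  # assumed 4-letter alphabet

def symbolize(series, n_segments=8):
    # Z-normalize, reduce with piecewise aggregate approximation (PAA),
    # then map each segment mean to a letter (SAX-style, an assumption here).
    mean = sum(series) / len(series)
    std = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5 or 1.0
    z = [(x - mean) / std for x in series]
    step = len(z) / n_segments
    text = ""
    for i in range(n_segments):
        chunk = z[int(i * step):int((i + 1) * step)] or [0.0]
        m = sum(chunk) / len(chunk)
        text += ALPHABET[sum(m > b for b in BREAKPOINTS)]
    return text

def train_models(labeled_series):
    # One bigram-count table per class, built from the symbolized texts.
    models = defaultdict(lambda: defaultdict(int))
    for label, series in labeled_series:
        text = symbolize(series)
        for a, b in zip(text, text[1:]):
            models[label][(a, b)] += 1
    return models

def fitness(text, model):
    # Add-one-smoothed log-probability of the sample's bigrams under one
    # class's counts (a stand-in for DSCo's actual fitness function).
    total = sum(model.values()) + len(ALPHABET) ** 2
    return sum(math.log((model[(a, b)] + 1) / total)
               for a, b in zip(text, text[1:]))

def classify(series, models):
    # Choose the class whose model gives the symbolized sample the best score.
    text = symbolize(series)
    return max(models, key=lambda label: fitness(text, models[label]))

# Toy usage: two classes, rising vs. falling series.
train = [("up", [1, 2, 3, 4, 5, 6, 7, 8]), ("down", [8, 7, 6, 5, 4, 3, 2, 1])]
models = train_models(train)
print(classify([2, 3, 4, 5, 6, 7, 8, 9], models))   # expected: "up"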

Original language: English
Title of host publication: Machine Learning and Data Mining in Pattern Recognition
Subtitle of host publication: 12th International Conference, MLDM 2016, New York, NY, USA, July 16–21, 2016, Proceedings
Editors: Petra Perner
Place of publication: Cham, Switzerland
Publisher: Springer
Pages: 294-310
Number of pages: 17
ISBN (Electronic): 9783319419206
ISBN (Print): 9783319419190
DOI: 10.1007/978-3-319-41920-6_22
Publication status: Published - 2016
Externally published: Yes
Event: International Conference on Machine Learning and Data Mining in Pattern Recognition 2016 - New York, United States of America
Duration: 16 Jul 2016 – 21 Jul 2016
Conference number: 12th
Internet address: https://web.archive.org/web/20160304204738/http://www.mldm.de/

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 9729
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: International Conference on Machine Learning and Data Mining in Pattern Recognition 2016
Abbreviated title: MLDM 2016
Country: United States of America
City: New York
Period: 16/07/16 – 21/07/16
Internet address: https://web.archive.org/web/20160304204738/http://www.mldm.de/

Keywords

  • Language Model
  • Time Series Data
  • Symbolic Representation
  • Dynamic Time Warping
  • Alphabet Size

Cite this

Li, D., Li, L., Bissyandé, T. F., Klein, J., & Le Traon, Y. (2016). DSCo: a language modeling approach for time series classification. In P. Perner (Ed.), Machine Learning and Data Mining in Pattern Recognition: 12th International Conference, MLDM 2016, New York, NY, USA, July 16–21, 2016, Proceedings (pp. 294-310). (Lecture Notes in Computer Science; Vol. 9729). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-41920-6_22