Finding cutpoints in noisy binary sequences - a revised empirical evaluation

Murlikrishna Viswanathan, Chris S. Wallace, David L. Dowe, Kevin B. Korb

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

11 Citations (Scopus)

Abstract

In an earlier paper, Kearns et al. (1997) presented an empirical evaluation of model selection methods on a specialized version of the segmentation problem. The inference task was to estimate a predefined Boolean function on the real interval [0,1] from a noisy random sample. Three model selection methods, based on Guaranteed Risk Minimization, the Minimum Description Length (MDL) principle and Cross-Validation, were evaluated on samples with varying noise levels. The authors concluded that, in general, no method was superior to the others in terms of predictive accuracy. In this paper we identify an inefficiency in the MDL approach as implemented by Kearns et al. and present an extended empirical evaluation that includes a revised version of the MDL method and a further approach based on the Minimum Message Length (MML) principle.
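The inference task described in the abstract can be sketched in a few lines: a noisy Boolean sequence is explained by a piecewise-constant hypothesis, and a two-part code length trades the number of cutpoints against the labels the hypothesis misclassifies. The sketch below is illustrative only; the function names (`min_errors_per_k`, `mdl_segments`) and the particular coding scheme are assumptions for this example, not the codes used by Kearns et al. or the MML method evaluated in the paper.

```python
import math

def min_errors_per_k(ys, max_segments):
    """err[s-1] = fewest label disagreements when the Boolean sequence ys
    is explained by s constant-valued segments (i.e. s - 1 cutpoints).
    Computed by dynamic programming over segment boundaries."""
    m = len(ys)
    ones = [0] * (m + 1)                      # prefix counts of 1-labels
    for i, y in enumerate(ys):
        ones[i + 1] = ones[i] + y

    def seg_cost(i, j):                       # best constant label for ys[i:j]
        o = ones[j] - ones[i]
        return min(o, (j - i) - o)

    INF = float("inf")
    err = [[INF] * (m + 1) for _ in range(max_segments + 1)]
    err[0][0] = 0
    for s in range(1, max_segments + 1):
        for j in range(1, m + 1):
            err[s][j] = min(err[s - 1][i] + seg_cost(i, j) for i in range(j))
    return [err[s][m] for s in range(1, max_segments + 1)]

def mdl_segments(ys, max_segments):
    """Choose the segment count minimising a simple two-part code length:
    bits to name the cutpoint positions among the m - 1 gaps, plus bits to
    name which of the m labels the hypothesis gets wrong."""
    m = len(ys)
    best_s, best_bits = 1, float("inf")
    for s, e in enumerate(min_errors_per_k(ys, max_segments), start=1):
        bits = math.log2(math.comb(m - 1, s - 1)) + math.log2(math.comb(m, e))
        if bits < best_bits:
            best_s, best_bits = s, bits
    return best_s
```

On a 30-label sequence drawn from a 3-segment target with one flipped label, this cost prefers 3 segments (2 cutpoints): adding further cutpoints would fit the flipped label but costs more bits to state the extra boundaries than it saves in the error part of the code.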

Original language: English
Title of host publication: Advanced Topics in Artificial Intelligence - 12th Australian Joint Conference on Artificial Intelligence, AI 1999, Proceedings
Editors: Norman Foo
Publisher: Springer
Pages: 405-416
Number of pages: 12
ISBN (Print): 3540668225, 9783540668220
Publication status: Published - 1999
Event: Australasian Joint Conference on Artificial Intelligence 1999 - Sydney, Australia
Duration: 6 Dec 1999 - 10 Dec 1999
Conference number: 12th
https://link.springer.com/book/10.1007/3-540-46695-9 (Proceedings)

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 1747
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Australasian Joint Conference on Artificial Intelligence 1999
Abbreviated title: AI 1999
Country: Australia
City: Sydney
Period: 6/12/99 - 10/12/99
