Details
Original language | English |
---|---|
Title of host publication | Image Analysis - 20th Scandinavian Conference |
Subtitle of host publication | SCIA 2017, Proceedings |
Editors | Puneet Sharma, Filippo Maria Bianchi |
Pages | 313-324 |
Number of pages | 12 |
Publication status | Published - 19 May 2017 |
Event | 20th Scandinavian Conference on Image Analysis, SCIA 2017 - Tromsø, Norway. Duration: 12 Jun 2017 → 14 Jun 2017 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 10269 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Abstract
Supervised machine learning is an important building block for many applications that involve data processing and decision making. Good classifiers are trained to produce accurate predictions on a training set while also generalizing well to unseen data. To this end, Bayes-Point-Machines (BPM) were proposed as a generalization of margin-maximizing classifiers such as Support-Vector-Machines (SVM). For BPMs, the optimal classifier is defined as an expectation over an appropriately chosen posterior distribution, which can be estimated via Markov-Chain-Monte-Carlo (MCMC) sampling. In this paper, we propose three improvements on the original BPM classifier. First, our new statistical model is regularized based on the sample size and allows for a true soft-margin formulation without the need to hand-tune any nuisance parameters. Secondly, this model can handle multi-class problems natively. Finally, our fast adaptive MCMC sampler uses Adaptive Direction Sampling (ADS) and can generate a sample from the proposed posterior with a runtime complexity quadratic in the size of the training set. Therefore, we call our new classifier the Multi-class Soft-margin Bayes-Point-Machine (MS-BPM). We have evaluated the generalization capabilities of our approach on several datasets and show that our soft-margin model significantly improves on the original BPM, especially for small training sets, and is competitive with SVM classifiers. We also show that class-membership probabilities generated from our model improve on Platt scaling, a popular method to derive calibrated probabilities from maximum-margin classifiers.
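To make the core idea concrete, the sketch below (not the authors' implementation) estimates a Bayes point for a linear classifier: weight vectors are drawn from a soft-margin pseudo-posterior via MCMC and averaged into a single classifier. Plain random-walk Metropolis stands in for the paper's Adaptive Direction Sampling, and the hinge-penalty posterior, the constant `c`, and all function names are illustrative assumptions rather than the paper's exact model.

```python
import numpy as np

def log_posterior(w, X, y, c=1.0):
    # Soft-margin style pseudo-posterior for a linear classifier:
    # a standard-normal prior on w plus a hinge penalty on margin violations.
    # (The paper's regularized model differs; this is an assumed stand-in.)
    margins = y * (X @ w)
    hinge = np.maximum(0.0, 1.0 - margins).sum()
    return -0.5 * (w @ w) - c * hinge

def bayes_point(X, y, n_samples=5000, step=0.1, seed=0):
    # Estimate the Bayes point as the posterior mean of sampled weight
    # vectors. Random-walk Metropolis is used here for brevity; the paper
    # employs Adaptive Direction Sampling over a population of chains.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    logp = log_posterior(w, X, y)
    samples = []
    for _ in range(n_samples):
        proposal = w + step * rng.standard_normal(w.shape)
        logp_prop = log_posterior(proposal, X, y)
        if np.log(rng.uniform()) < logp_prop - logp:  # Metropolis accept step
            w, logp = proposal, logp_prop
        samples.append(w)
    return np.asarray(samples).mean(axis=0)

# Toy usage: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])
w_bp = bayes_point(X, y)
print("training accuracy:", (np.sign(X @ w_bp) == y).mean())
```

Averaging posterior samples, rather than keeping a single maximum-margin solution, is what distinguishes a Bayes point from an SVM solution; the paper's contribution is a sample-size-regularized multi-class posterior and a faster ADS-based sampler for exactly this expectation.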
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
Cite this
Vogt, K., & Ostermann, J. (2017). Soft Margin Bayes-Point-Machine Classification via Adaptive Direction Sampling. In: Image Analysis - 20th Scandinavian Conference: SCIA 2017, Proceedings. ed. / Puneet Sharma; Filippo Maria Bianchi. p. 313-324 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10269 LNCS). https://doi.org/10.1007/978-3-319-59126-1_26
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Soft Margin Bayes-Point-Machine Classification via Adaptive Direction Sampling
AU - Vogt, Karsten
AU - Ostermann, Jörn
N1 - Funding information: This work was supported by the German Research Foundation (DFG) under grant OS 295/4-1.
PY - 2017/5/19
Y1 - 2017/5/19
N2 - Supervised machine learning is an important building block for many applications that involve data processing and decision making. Good classifiers are trained to produce accurate predictions on a training set while also generalizing well to unseen data. To this end, Bayes-Point-Machines (BPM) were proposed as a generalization of margin-maximizing classifiers such as Support-Vector-Machines (SVM). For BPMs, the optimal classifier is defined as an expectation over an appropriately chosen posterior distribution, which can be estimated via Markov-Chain-Monte-Carlo (MCMC) sampling. In this paper, we propose three improvements on the original BPM classifier. First, our new statistical model is regularized based on the sample size and allows for a true soft-margin formulation without the need to hand-tune any nuisance parameters. Secondly, this model can handle multi-class problems natively. Finally, our fast adaptive MCMC sampler uses Adaptive Direction Sampling (ADS) and can generate a sample from the proposed posterior with a runtime complexity quadratic in the size of the training set. Therefore, we call our new classifier the Multi-class Soft-margin Bayes-Point-Machine (MS-BPM). We have evaluated the generalization capabilities of our approach on several datasets and show that our soft-margin model significantly improves on the original BPM, especially for small training sets, and is competitive with SVM classifiers. We also show that class-membership probabilities generated from our model improve on Platt scaling, a popular method to derive calibrated probabilities from maximum-margin classifiers.
AB - Supervised machine learning is an important building block for many applications that involve data processing and decision making. Good classifiers are trained to produce accurate predictions on a training set while also generalizing well to unseen data. To this end, Bayes-Point-Machines (BPM) were proposed as a generalization of margin-maximizing classifiers such as Support-Vector-Machines (SVM). For BPMs, the optimal classifier is defined as an expectation over an appropriately chosen posterior distribution, which can be estimated via Markov-Chain-Monte-Carlo (MCMC) sampling. In this paper, we propose three improvements on the original BPM classifier. First, our new statistical model is regularized based on the sample size and allows for a true soft-margin formulation without the need to hand-tune any nuisance parameters. Secondly, this model can handle multi-class problems natively. Finally, our fast adaptive MCMC sampler uses Adaptive Direction Sampling (ADS) and can generate a sample from the proposed posterior with a runtime complexity quadratic in the size of the training set. Therefore, we call our new classifier the Multi-class Soft-margin Bayes-Point-Machine (MS-BPM). We have evaluated the generalization capabilities of our approach on several datasets and show that our soft-margin model significantly improves on the original BPM, especially for small training sets, and is competitive with SVM classifiers. We also show that class-membership probabilities generated from our model improve on Platt scaling, a popular method to derive calibrated probabilities from maximum-margin classifiers.
UR - http://www.scopus.com/inward/record.url?scp=85020458313&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-59126-1_26
DO - 10.1007/978-3-319-59126-1_26
M3 - Conference contribution
AN - SCOPUS:85020458313
SN - 9783319591254
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 313
EP - 324
BT - Image Analysis - 20th Scandinavian Conference
A2 - Sharma, Puneet
A2 - Bianchi, Filippo Maria
T2 - 20th Scandinavian Conference on Image Analysis, SCIA 2017
Y2 - 12 June 2017 through 14 June 2017
ER -