2015 | OriginalPaper | Chapter Open Access
Published in:
Efficient Learning Machines
This chapter covers the details of the support vector machine (SVM) technique, a sparse kernel decision machine that avoids computing posterior probabilities when building its learning model. SVM offers a principled approach to machine learning problems because of its mathematical foundation in statistical learning theory, and it constructs its solution in terms of a subset of the training inputs. SVM has been used extensively for classification, regression, novelty detection, and feature reduction. This chapter focuses on SVM for supervised classification only, providing SVM formulations for when the input space is linearly separable or linearly nonseparable and when the data are unbalanced, along with examples. The chapter also presents recent improvements to, and extensions of, the original SVM formulation. A case study concludes the chapter.
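The chapter's formulations are not reproduced on this page, but the core idea behind linear SVM classification — a maximum-margin separating hyperplane trained by penalizing margin violations (hinge loss) — can be sketched in a few lines. The toy data, learning rate, and regularization strength below are illustrative choices, not values from the chapter:

```python
# Minimal linear SVM sketch: stochastic sub-gradient descent on the hinge loss
# with L2 regularization. Toy data: class +1 lies above the line x2 = x1,
# class -1 below it (hypothetical example, not from the chapter).
X = [(0.0, 1.0), (1.0, 2.0), (0.5, 1.5), (1.0, 0.0), (2.0, 1.0), (1.5, 0.5)]
y = [1, 1, 1, -1, -1, -1]

w = [0.0, 0.0]
b = 0.0
lr, lam = 0.1, 0.01  # learning rate and regularization strength (assumed values)

for epoch in range(200):
    for xi, yi in zip(X, y):
        margin = yi * (w[0] * xi[0] + w[1] * xi[1] + b)
        if margin < 1:
            # Point is inside the margin (or misclassified): hinge loss is
            # active, so push the hyperplane toward classifying it correctly.
            w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
            b += lr * yi
        else:
            # Point is safely outside the margin: only apply the
            # regularization shrinkage that drives w toward max margin.
            w = [wj - lr * lam * wj for wj in w]

def predict(x):
    """Classify by the sign of the learned decision function w.x + b."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

print([predict(x) for x in X])  # -> [1, 1, 1, -1, -1, -1]
```

Only the training points that end up on or inside the margin drive the updates — a loose analogue of the support vectors that define the SVM solution in the chapter's dual formulation.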
1. Established in the 1970s, Lagrangian relaxation provides bounds for the branch-and-bound algorithm and has been used extensively in scheduling and routing. It converts many hard integer-programming problems into simpler ones by moving the constraints into the objective function, weighted by Lagrange multipliers. (For a more in-depth discussion of Lagrangian relaxation, see Fisher 2004.)
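As a concrete (hypothetical) illustration of the footnote's point, consider a tiny 0-1 knapsack problem. Relaxing the capacity constraint into the objective with a multiplier λ ≥ 0 makes the inner maximization decompose per item, and every choice of λ yields an upper bound on the true optimum — exactly the kind of bound branch-and-bound can exploit. The item values, weights, and multipliers below are made up for illustration:

```python
from itertools import product

# Hypothetical 0-1 knapsack: maximize c.x subject to a.x <= cap, x in {0,1}^3.
c = [6, 5, 4]   # item values
a = [5, 4, 3]   # item weights
cap = 8

# Exact optimum by brute force (only 2^3 = 8 candidate solutions).
opt = max(
    sum(ci * xi for ci, xi in zip(c, x))
    for x in product([0, 1], repeat=3)
    if sum(ai * xi for ai, xi in zip(a, x)) <= cap
)

def lagrangian_bound(lam):
    # Relax the capacity constraint into the objective with multiplier lam >= 0:
    #   L(lam) = max_x sum_i (c_i - lam * a_i) x_i + lam * cap.
    # The relaxed problem decomposes per item: set x_i = 1 exactly when the
    # reduced cost c_i - lam * a_i is positive.
    return sum(max(ci - lam * ai, 0.0) for ci, ai in zip(c, a)) + lam * cap

print(opt)  # -> 10  (take items 1 and 3: value 6 + 4, weight 5 + 3 = 8)
for lam in (0.5, 1.0, 1.2, 1.5):
    assert lagrangian_bound(lam) >= opt  # every multiplier gives an upper bound
print(round(lagrangian_bound(1.2), 2))  # -> 10.2, a fairly tight bound
```

Minimizing the bound over λ (the Lagrangian dual) tightens it as far as possible; the same multiplier machinery underlies the dual SVM formulation developed in the chapter.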
- Title
- Support Vector Machines for Classification
- DOI
- https://doi.org/10.1007/978-1-4302-5990-9_3
- Authors
- Mariette Awad, Rahul Khanna
- Publisher
- Apress
- Sequence number
- 3
- Chapter number
- 3