SVM is one of the supervised learning algorithms used mostly for classification problems. This article gives an idea of its advantages in general. SVM is a very helpful method when we do not have much prior idea about the data: it can be used for data that is not regularly distributed or whose distribution is unknown, and it works on data such as images, text, and audio.

SVM provides a very useful technique known as the kernel. By applying a suitable kernel function, we can solve complex, non-linearly-separable problems. Classification methods often rest on the strong assumption that the samples are linearly separable; with a kernel, the input data is implicitly mapped into a higher-dimensional space, so this assumption is no longer needed. The kernel function is chosen for the data at hand, need not be linear, and can take different forms for different data. Formally,

K(x1, x2) = ⟨f(x1), f(x2)⟩

where K is the kernel function, x1 and x2 are n-dimensional inputs, f is a function that maps the n-dimensional input space into an m-dimensional feature space, and ⟨·, ·⟩ denotes the dot product.

Beyond the kernel, SVM has several other advantages:

- It generally does not suffer from overfitting, and it performs well when there is a clear margin of separation between the classes.
- It can be used when the number of samples is smaller than the number of dimensions, and it performs well in terms of memory.
- It generalizes well to out-of-sample data.
- Classifying a new sample is fast: the kernel function only has to be evaluated against the support vectors, not the whole training set.
- It handles high-dimensional data well, which is a great help given its wide usage in machine learning.
- It finds the separating hyperplane, which can be used to classify the data correctly into different groups.
- Training is a convex optimization problem, so we are assured of optimality: the answer is a global minimum, not a local one.
- The large margin it likes to generate leaves room for new data to be classified correctly.
- The classifier ideally depends only on a subset of the points, those that define the maximal margin between the closest points of the two classes, so we do not need to take every training point into account.
- Outliers have little influence on SVM. A method that depends on the mean of the data can be skewed by outliers, since the mean then no longer represents the data set; because the SVM classifier depends only on the support vectors, it is far less affected.
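As a quick sketch (not code from this article), the kernel identity K(x1, x2) = ⟨f(x1), f(x2)⟩ can be checked numerically for the degree-2 polynomial kernel K(x, z) = (x · z)², whose feature map f happens to be known in closed form for 2-D inputs:

```python
import numpy as np

def phi(x):
    """Explicit feature map f for the degree-2 polynomial kernel on R^2:
    phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2), mapping 2-D input into 3-D space."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def poly_kernel(x, z):
    """K(x, z) = (x . z)^2, computed without ever forming phi(x) or phi(z)."""
    return np.dot(x, z) ** 2

# Two arbitrary 2-D inputs for the check.
x1 = np.array([1.0, 2.0])
x2 = np.array([3.0, -1.0])

lhs = poly_kernel(x1, x2)        # kernel evaluated in the original 2-D space
rhs = np.dot(phi(x1), phi(x2))   # dot product in the mapped 3-D feature space
print(lhs, rhs)                  # both print 1.0
```

The point of the trick is on the left-hand side: the kernel gives the high-dimensional dot product while only ever touching the original low-dimensional inputs.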
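The convexity and margin points above can be illustrated with a minimal linear SVM trained by subgradient descent on the regularized hinge loss; since that loss is convex, plain gradient steps head toward the global optimum. This is a toy sketch, not the article's code, and the data and hyperparameters below are made up:

```python
import numpy as np

# Toy linearly separable 2-D data; labels are in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],
              [-2.0, -2.0], [-3.0, -1.0], [-2.5, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])

# Full-batch subgradient descent on the convex regularized hinge loss:
#   L(w, b) = lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w + b)))
lam, lr = 0.01, 0.1
w, b = np.zeros(2), 0.0
for _ in range(2000):
    margins = y * (X @ w + b)
    active = margins < 1  # points on or inside the margin (the "support" points)
    grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / len(X)
    grad_b = -y[active].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

pred = np.sign(X @ w + b)
print(pred)  # matches y on this separable toy set
```

Note that only the `active` points contribute to the gradient: points far outside the margin (including would-be outliers on the correct side) have no influence on the final hyperplane, which is exactly the sparsity and outlier-robustness argument made above.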