Introduction to Machine Learning by WWCode Data Science, Part 5 Recap

Source: https://opencollective.com/wwcode

Women Who Code is a fantastic non-profit organization whose goal is to expose more women to tech-related careers. Their chapter, Women Who Code Data Science, is putting on a 6-week introduction to machine learning course on Saturdays from 5/16/20 – 6/20/20. Each week, on the following Monday, I will be posting a recap of the course on my blog. Here’s the recap of Part 5; the full video is available on YouTube.

Part 5 Focus: Logistic Regression and Support Vector Machines

1. Linear Regression vs. Logistic Regression

  • The goal of linear regression is to identify the line that best fits a data set, i.e., the one with the lowest loss.
  • The goal of logistic regression is to identify the line that best separates data into groups.
Photo Courtesy of WWCode Data Science

2. Logistic Regression

  • Logistic regression is a form of classification that predicts a continuous value between 0 and 1, which represents the likelihood that a data point belongs to a certain class.
  • This numerical value can be converted to a categorical value by applying a threshold: above it, the prediction is true (the data point belongs to the class); below it, false (it does not), as sketched below.
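
A minimal sketch of that thresholding step (the probabilities below are made up for illustration):

```python
import numpy as np

# Hypothetical probabilities output by a fitted logistic regression model
probs = np.array([0.12, 0.57, 0.91, 0.43])

# Convert continuous probabilities into categorical labels with a 0.5 threshold
labels = (probs >= 0.5).astype(int)
print(labels)  # [0 1 1 0]
```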

3. Types of Logistic Regression

  • Binary logistic regression: 2 possible outcomes (Approved or Denied for a loan)
  • Multinomial logistic regression: > 2 possible outcomes (Sedan, Truck, or SUV)
  • Ordinal logistic regression: > 2 possible outcomes where order matters (Place in a race from 1st to 7th)
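
As a quick illustration of the multinomial case (assuming scikit-learn is available; the Iris data set is just a convenient stand-in), note that binary and multinomial targets are handled by the same estimator, while ordinal regression needs a specialized library:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Iris has 3 unordered classes, so this is a multinomial problem
X, y = load_iris(return_X_y=True)

clf = LogisticRegression(max_iter=1000)  # handles binary and multinomial targets
clf.fit(X, y)
print(clf.predict(X[:5]))  # predicted class labels for the first 5 flowers
```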

4. Sigmoid Function

  • The logistic curve, which is often called the sigmoid curve, is represented by the sigmoid function σ(z) = 1 / (1 + e^(−z)), which squashes any real-valued input into the interval (0, 1).
Photo Courtesy of Analytics India Mag
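
A one-line NumPy version of the sigmoid, just to show how it squashes its inputs:

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued input into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # ~[0.018  0.5  0.982]
```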

5. Log Odds

  • The odds of something happening (the ratio of the probability of an event occurring to the probability of it not occurring) are calculated by the formula odds = p / (1 − p).
Photo Courtesy of LSM
  • The log odds is quite literally just applying a log transformation to the formula above, log(p / (1 − p)), which maps probabilities onto the whole real line and is the quantity that logistic regression models as a linear function of the features.
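
As a tiny worked example (the probability p is made up):

```python
import numpy as np

p = 0.8                   # probability that the event occurs (made up)
odds = p / (1 - p)        # 4.0: the event is 4x as likely to occur as not
log_odds = np.log(odds)   # ~1.386, the logit of p
print(odds, log_odds)
```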

6. The Log Loss Function

  • Remember from last week that a loss function is a function used to evaluate the performance of an algorithm.
  • The function used to calculate loss in logistic regression is the log loss (binary cross-entropy): L = −[y · log(ŷ) + (1 − y) · log(1 − ŷ)], averaged over all training examples, where y is the true label and ŷ is the predicted probability.
Photo Courtesy of WWCode Data Science
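
A minimal NumPy implementation of that log loss, with made-up labels and predicted probabilities:

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy, averaged over all examples."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid taking log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])          # true labels (hypothetical)
y_pred = np.array([0.9, 0.2, 0.7, 0.6])  # predicted probabilities (hypothetical)
print(log_loss(y_true, y_pred))          # ~0.30
```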

7. Decision Boundary

  • The decision boundary is the line (or surface) that separates training examples into two or more distinct classes.
  • Some data is linearly separable (A), which means that the decision boundary is linear. This data can be classified by logistic regression.
  • On the other hand, some data is non-linearly separable (B). Logistic regression cannot handle this type of data.
Photo Courtesy of WWCode Data Science
  • When using logistic regression, the goal is to iteratively adjust the decision boundary until you find the one with the smallest cost, as sketched below.
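
One common way to do that iterative search is gradient descent on the log loss. Here is a sketch on made-up one-dimensional data (the data, learning rate, and iteration count are all arbitrary):

```python
import numpy as np

# Made-up, linearly separable 1-D data: small x -> class 0, large x -> class 1
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])
Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend an intercept column

w = np.zeros(2)
lr = 0.1
for _ in range(5000):                  # iteratively reduce the log loss
    p = 1 / (1 + np.exp(-Xb @ w))      # sigmoid of the linear score
    w -= lr * Xb.T @ (p - y) / len(y)  # gradient of the mean log loss

# The decision boundary sits where the score w0 + w1*x crosses 0
print("boundary at x =", -w[0] / w[1])  # lands between the two clusters
```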

8. Support Vector Machines

  • Support vector machines (SVMs) are a class of algorithms that can be used for both classification and regression.
  • When using an SVM, we can take a data set with a non-linear decision boundary and add more dimensions, so that a hyperplane can act as the decision boundary in the higher-dimensional space.
  • The goal of an SVM is to find the decision hyperplane with the maximum margin, i.e., the largest distance between the hyperplane and the nearest observed points of each class (the support vectors); see the sketch below.
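
A short scikit-learn sketch of an SVM handling data that is not linearly separable in its original two dimensions (the data set and parameters are arbitrary):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaving half-moons: not linearly separable in 2-D
X, y = make_moons(n_samples=200, noise=0.15, random_state=0)

# The RBF kernel implicitly lifts the data into a higher-dimensional space
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy; should be close to 1.0
```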

9. Kernels

  • Kernel methods are a class of algorithms for pattern analysis (such as clusters, rankings, or correlations).
  • They translate lower-dimensional data to higher dimensions without ever computing the coordinates of the data in the higher-dimensional space (the “kernel trick”).
  • Kernel methods can be considered a “generalized dot product”.
  • Types of kernels include linear kernels, polynomial kernels, and Gaussian kernels.
  • A linear kernel works well if your data is linearly separable (but in most of those cases, you can just use logistic regression!)
  • A polynomial kernel raises the dot product of the input vectors (plus a constant) to some power, and is a very commonly used kernel.
  • Gaussian (RBF) kernels depend only on the distance between the two points, so nearby points score close to 1 and distant points close to 0; each type is computed by hand below.
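
To make the three kernel types concrete, here is each one computed by hand on two made-up vectors (the constant and degree in the polynomial kernel, and sigma in the Gaussian, are arbitrary choices):

```python
import numpy as np

x = np.array([1.0, 2.0])
z = np.array([0.5, -1.0])

linear = x @ z                                  # plain dot product: -1.5
poly = (x @ z + 1) ** 3                         # polynomial kernel, degree 3
gaussian = np.exp(-np.sum((x - z) ** 2) / 2.0)  # Gaussian (RBF) kernel, sigma = 1
print(linear, poly, gaussian)
```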

We’ve now covered supervised learning, classification (part 1 and part 2), linear regression, logistic regression, and support vector machines. That’s a lot of great educational content. But we’re not done yet! Women Who Code Data Science still has one more amazing week of learning left: next Saturday’s session covers model evaluation, and you can register for that awesome session here. Hope to see you there! 🙂
