# My Deep Learning Workstation Setup



# An Overview of Linear Regression Models

# Interactive Data Visualization in Python

# A Practical Guide to Autoencoders

# Understanding Boosted Tree Models

# A Practical Guide to Tree-Based Learning Algorithms

# Understanding Support Vector Machine via Examples

# Python Tutorial - Week 2


Lately, a lot of my friends have been asking about my deep learning workstation setup. In this post
I am going to describe my hardware, OS, and the different packages that I use. In particular, based on
the questions, I found that most of the interest has been around managing different Python
versions and libraries like PyTorch and TensorFlow.
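Since most of the questions are about juggling interpreter and library versions, here is a quick sanity check I find handy (a minimal sketch; the `torch` import at the end is only illustrative):

```python
import sys

# Show which interpreter this shell/environment resolves to, and its version.
# Useful when several Python installs coexist on one workstation.
print(sys.executable)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}")

# The same idea works for libraries, e.g.:
# import torch; print(torch.__version__)
```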

Reading Time: 9 minutes Read more…

Modeling the relationship between a scalar response (or dependent variable)
and one or more explanatory variables (or independent variables) is commonly
referred to as a **regression** problem. The simplest model of such a
relationship can be described by a linear function - referred to
as *linear regression*.
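As a minimal illustration of the idea (synthetic data and NumPy only - not the notation used in the post itself), ordinary least squares recovers the slope and intercept of a noisy linear relationship:

```python
import numpy as np

# Synthetic data drawn from y = 3x + 2 plus Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=100)

# Least-squares fit of the linear function w*x + b
X = np.column_stack([x, np.ones_like(x)])   # design matrix with a bias column
w, b = np.linalg.lstsq(X, y, rcond=None)[0]
print(w, b)  # close to the true slope 3 and intercept 2
```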

Reading Time: 16 minutes Read more…

There are two types of data visualizations: *exploratory* and *explanatory*.
Explanatory analysis is what happens when you have something specific you want
to show an audience. The aim of **explanatory** visualizations is to tell
stories - they’re carefully constructed to surface key findings.
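For example, an explanatory chart typically pairs a single series with a title and an annotation that state the finding outright (a sketch with made-up numbers, assuming matplotlib is installed):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Illustrative (fabricated) monthly series
months = range(1, 13)
sales = [3, 4, 4, 5, 7, 9, 12, 11, 8, 6, 4, 3]

# Explanatory style: one series, a title that states the finding,
# and an annotation that points the reader at the key feature.
fig, ax = plt.subplots()
ax.plot(months, sales, marker="o")
ax.annotate("seasonal peak", xy=(7, 12), xytext=(9, 12),
            arrowprops=dict(arrowstyle="->"))
ax.set_title("Sales peak in July")
ax.set_xlabel("Month")
ax.set_ylabel("Units (thousands)")
fig.savefig("sales.png")
```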

Reading Time: 11 minutes Read more…

Usually in a conventional neural network, one tries to predict a target vector
$y$ from input vectors $x$. In an autoencoder network, one tries to predict
$x$ from $x$ itself. It is trivial to learn a mapping from $x$ to $x$ if the
network has no constraints, but the learning process becomes more interesting
when the network is constrained. In this article, we are going to take a
detailed look at the mathematics of different types of autoencoders (with
different constraints), along with sample implementations using Keras with a
TensorFlow backend.
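The core idea can be sketched without Keras: a linear network with a two-unit bottleneck, trained by plain gradient descent to reconstruct four-dimensional inputs (a toy sketch on synthetic data; the post's Keras implementations are the real thing):

```python
import numpy as np

# Toy linear autoencoder: 4-d inputs, 2-d bottleneck, trained by gradient
# descent on the mean squared reconstruction error.
rng = np.random.default_rng(0)

# Synthetic data that lies on a 2-d subspace of R^4, so perfect
# reconstruction through the 2-unit bottleneck is possible.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 4))

W_enc = rng.normal(scale=0.1, size=(4, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 4))   # decoder weights
lr = 0.02

for _ in range(5000):
    H = X @ W_enc                        # code (the constrained representation)
    err = H @ W_dec - X                  # reconstruction error
    grad_dec = H.T @ err / len(X)        # gradient w.r.t. decoder weights
    grad_enc = X.T @ (err @ W_dec.T) / len(X)  # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(mse)  # far below the variance of the data
```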

Reading Time: 22 minutes Read more…

In the previous post, we learned about tree-based learning methods - the basics of tree-based models and the use of bagging to reduce variance. We also looked at one of the most famous learning algorithms based on the idea of bagging: random forests. In this post, we will look into the details of yet another type of tree-based learning algorithm: boosted trees.
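The mechanism - repeatedly fitting a small tree to the current residuals and adding a shrunken copy of it to the ensemble - can be sketched with decision stumps (a toy sketch on synthetic one-dimensional data):

```python
import numpy as np

# Gradient boosting for regression with decision stumps: each round fits a
# one-split tree to the residuals and adds a shrunken copy to the ensemble.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.1, 200)

def fit_stump(x, r):
    """Return the single-split predictor minimizing squared error on r."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(right) == 0:
            continue  # a split must leave points on both sides
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q, t=t, lv=lv, rv=rv: np.where(q <= t, lv, rv)

lr = 0.1                        # shrinkage (learning rate)
pred = np.zeros_like(y)
for _ in range(200):
    residual = y - pred         # negative gradient of the squared loss
    stump = fit_stump(x, residual)
    pred += lr * stump(x)

mse = np.mean((y - pred) ** 2)
print(mse)  # far below the variance of y
```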

Reading Time: 26 minutes Read more…

Tree-based learning algorithms are quite common in data science competitions.
These algorithms power predictive models with high accuracy, stability, and
ease of interpretation. Unlike linear models, they map non-linear
relationships quite well. Common examples of tree-based models are decision
trees, random forests, and boosted trees.
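To make the list concrete, here is a quick cross-validated comparison of the three families on the iris dataset (a sketch assuming scikit-learn is installed; every hyperparameter is left at its default):

```python
# Compare a single decision tree, a random forest (bagging), and boosted
# trees on iris using 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
scores = {}
for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    name = type(model).__name__
    scores[name] = cross_val_score(model, X, y, cv=5).mean()
    print(name, round(scores[name], 3))
```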

Reading Time: 25 minutes Read more…

In the previous post on Support Vector Machines (SVM), we looked at the mathematical details of the algorithm. In this post, I will discuss practical implementations of SVM for classification as well as regression.
I will use the iris dataset as an example for the classification problem, and randomly generated data as an example for the regression problem.
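A minimal sketch of that plan (assuming scikit-learn is installed): `SVC` on iris for classification and `SVR` on synthetic data for regression, both with the default RBF kernel:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, SVR

# Classification: RBF-kernel SVC on the iris dataset
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print("classification accuracy:", round(acc, 3))

# Regression: SVR on randomly generated data around a sine curve
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, 200)).reshape(-1, 1)
t = np.sin(x).ravel() + rng.normal(0, 0.1, 200)
reg = SVR(kernel="rbf").fit(x, t)
r2 = reg.score(x, t)
print("regression R^2:", round(r2, 3))
```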

Reading Time: 12 minutes Read more…

In Week 1 we got started with Python. Now that we can interact with Python, let's dig deeper into it.

This week we will go over some additional fundamentals common to any program: interactive input from users, adding comments to your code, conditional logic with `if`/`else`, loops, and formatted output with strings and `print()` statements.
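A toy script touching each of those pieces (interactive input is shown as a comment so the script also runs non-interactively):

```python
# Comments explain the code; interactive input would look like:
# name = input("What is your name? ")
numbers = [3, 7, 10, 15]
lines = []

for n in numbers:                        # a loop over a list
    if n % 2 == 0:                       # conditional logic
        parity = "even"
    else:
        parity = "odd"
    lines.append(f"{n} is {parity}")     # formatted output with an f-string

for line in lines:
    print(line)
```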

Reading Time: 11 minutes Read more…

## SO WHAT DO YOU THINK?

Would you like me to write about any particular topics in machine learning or deep learning? If you have any suggestions or comments, please leave me a note at any of the social links. You may also leave comments on any specific page or post.

Visit my GitHub page