This post covers an experiment in Azure Machine Learning. The experiment is based on one from the book Microsoft Azure Essentials: Azure Machine Learning.

 

There are several categories of machine learning algorithms in the Azure Machine Learning toolkit. We list them below, with a small code sketch after the list:

  • Classification algorithms

These are used to classify data into different categories that can then be used to predict one or more discrete variables, based on the other attributes in the dataset.

  • Regression algorithms

These are used to predict one or more continuous variables, such as profit or loss, based on other attributes in the dataset.

  • Clustering algorithms

These determine natural groupings and patterns in datasets and are used to predict grouping classifications for a given variable.
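
To make the distinction between the three categories concrete, here is a minimal scikit-learn sketch of the task types. This is my own analogue for illustration; Azure ML uses its own drag-and-drop modules, not this code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression
    from sklearn.cluster import KMeans

    X = np.random.rand(100, 4)                 # 100 rows, 4 numeric attributes

    # Classification: predict a discrete variable from the other attributes
    y_class = np.random.randint(0, 2, 100)     # e.g. two income classes
    clf = LogisticRegression().fit(X, y_class)

    # Regression: predict a continuous variable, such as profit or loss
    y_cont = np.random.rand(100)
    reg = LinearRegression().fit(X, y_cont)

    # Clustering: find natural groupings; no labels needed (unsupervised)
    groups = KMeans(n_clusters=3, n_init=10).fit_predict(X)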

 

Supervised learning

  • Classification and Regression (input data is known, output is known)

Unsupervised learning

  • Clustering

Workflow for supervised learning

ML experiment workflow

Summary

Azure Machine Learning provides a way of applying historical data to a problem by creating a model and using it to predict future behaviors or trends. We have briefly touched on the continuous cycle of predictive model creation, model evaluation, model deployment, and the testing and feedback loop.

The primary predictive analytics algorithms currently used in Azure Machine Learning are classification, regression, and clustering.

Experiment

Now we will try our own experiment. We will use data from a public repository, the UCI Machine Learning Repository; the data is the Census Income dataset (http://archive.ics.uci.edu/ml/datasets/Census+Income). The dataset is a 32,526 × 15 matrix. The column income is the value that we are going to try to predict, and the prediction will be based on the other 14 attributes (more on those later).
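
If you want a first look at the same data outside Azure ML, a minimal pandas sketch could look like the one below. The column names follow the UCI documentation; the exact file URL is my assumption based on the repository's usual layout.

    import pandas as pd

    # Column names as documented on the UCI page for the Census Income dataset
    columns = [
        "age", "workclass", "fnlwgt", "education", "education-num",
        "marital-status", "occupation", "relationship", "race", "sex",
        "capital-gain", "capital-loss", "hours-per-week", "native-country",
        "income",
    ]

    # adult.data is the raw, header-less training file in the UCI repository
    url = "http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
    df = pd.read_csv(url, names=columns, skipinitialspace=True)

    print(df.shape)                         # rows x 15 columns; "income" is the target
    print(df["income"].value_counts())      # <=50K vs >50K class balance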

The first step is to upload the dataset to Azure ML: click on new dataset and then upload.

ML experiment

Second, we create a new Azure ML experiment.

Experiment screenshot

 

Once the data is uploaded and the experiment is created, we can have a first glimpse of the data. Drag the dataset from the left panel to the workspace in the middle. We can easily visualize the dataset by right-clicking it and selecting Visualize. It's always nice to get a first feel for the data.

Visualize data

I will now let you experiment on your own and jump straight ahead to the finished model. Below you can see a screenshot of the workspace, which includes all of the different steps. The algorithm chosen for the first run is a Two-Class Boosted Decision Tree; a rough code analogue follows the screenshot.

full model
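
Azure ML's Two-Class Boosted Decision Tree is, conceptually, a gradient-boosted ensemble of shallow decision trees. As a rough stand-in (my assumption for illustration, not the module's actual implementation), a scikit-learn version on placeholder data could look like this:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Placeholder data; in the real experiment this is the encoded census data
    X, y = make_classification(n_samples=1000, n_features=14, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Boosted decision trees: many shallow trees, each correcting the previous ones
    model = GradientBoostingClassifier(
        n_estimators=100,   # number of trees in the ensemble
        max_depth=3,        # keep each individual tree shallow
        learning_rate=0.1,  # how strongly each tree corrects the last
    ).fit(X_train, y_train)

    print("test accuracy:", model.score(X_test, y_test))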

Let us look at the results after running the experiment. We right-click the Evaluate Model block and select Visualize.

In general, classification models are evaluated according to the following metrics (a sketch for computing them follows the list).

  • Accuracy measures the goodness of a classification model as the proportion of true results to total cases.
  • Precision is the proportion of true results over all positive results.
  • Recall is the proportion of actual positive cases that the model correctly identifies.
  • F-score is computed as the harmonic mean of precision and recall, ranging between 0 and 1, where the ideal F-score value is 1.
  • AUC measures the area under the curve plotted with the true positive rate on the y axis and the false positive rate on the x axis. This metric is useful because it provides a single number that lets you compare models of different types.
  • Average log loss is a single score used to express the penalty for wrong results. It is calculated as the difference between two probability distributions – the true one, and the one in the model.
  • Training log loss is a single score that represents the advantage of the classifier over a random prediction.
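
If you want to reproduce these metrics outside the Studio, scikit-learn exposes all of them. A minimal sketch on toy labels (not the census model):

    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score, log_loss)

    # Toy true labels, hard predictions, and predicted probabilities
    y_true = [0, 0, 1, 1, 1, 0, 1, 0]
    y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
    y_prob = [0.2, 0.6, 0.9, 0.8, 0.4, 0.1, 0.7, 0.3]

    print("accuracy :", accuracy_score(y_true, y_pred))   # true results / all cases
    print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
    print("recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
    print("F-score  :", f1_score(y_true, y_pred))         # harmonic mean of P and R
    print("AUC      :", roc_auc_score(y_true, y_prob))    # area under the ROC curve
    print("log loss :", log_loss(y_true, y_prob))         # penalty on wrong probabilities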

We are presented with the ROC curve of the model.

Receiver Operating Characteristic (ROC) curves display the fraction of true positives out of the total actual positives, contrasted with the fraction of false positives out of the total actual negatives, at various threshold settings. The diagonal line represents 50 percent accuracy in your predictions and can be used as a benchmark that you want to improve on. The higher and further to the left the curve lies, the more accurate the model is.

The straight line shows a model with a 50 % chance of predicting the right value; we want our curve to lie above that line, and as you can see, our model does.

ROC curve

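To see how such a curve is built, here is a small matplotlib/scikit-learn sketch on toy data (again an illustration of the technique, not the Studio's own plotting code):

    import matplotlib.pyplot as plt
    from sklearn.metrics import auc, roc_curve

    # Toy labels and predicted probabilities
    y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
    y_prob = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.5, 0.6, 0.75]

    # False positive rate (x axis) vs. true positive rate (y axis) per threshold
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)

    plt.plot(fpr, tpr, label=f"model (AUC = {auc(fpr, tpr):.2f})")
    plt.plot([0, 1], [0, 1], "--", label="random guess (50 %)")  # the diagonal
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()
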
Furthermore, we can look at some more results from our model if we scroll down below the ROC curve. See the table below.

data result

So, that was the first run of our model. If we are happy with the outcome, we can move on to the next step. If not, we can either try the same model again with different parameters or try a completely different model.

I'm satisfied at the moment, so I feel confident in moving on to the next step, which will be to set up a web service and publish our model.

See you in the next blog post!
