Crowd-sourced Recommender Demo


Recommender Demo – click here!

This recommender demo illustrates how a website (online music, e-commerce, news) generates recommendations to increase engagement and conversions.

This is not production-ready; it is merely a POC of how it works.

* user selects favorite activities
* data is passed to the server and processed in Hadoop
* user can go to the results page and select an activity to get recommendations

At this point, an automated workflow has not been built, so there is a series of manual steps to create the new dataset. Here are the general steps:

1. user data feeds into a database via the website (this data is used in generating recommendations)
2. data is moved to and processed in Hadoop
3. data is moved to MySQL, where it is accessible using PHP
4. user selects an activity, and the crowd-sourced recommendations are displayed (a sketch of this lookup follows below)
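
Step 4 is the only part a site visitor sees: pre-computed recommendations are read out of MySQL. The demo itself uses PHP for this, but here is a rough sketch of the same lookup in Java via JDBC. The table and column names (`recommendations`, `activity`, `recommended_activity`, `cooccurrence_count`) are hypothetical placeholders, not the demo's actual schema.

```java
// Hypothetical lookup of pre-computed recommendations from MySQL.
// Assumes the MySQL JDBC driver is on the classpath and that a
// "recommendations" table with the columns below exists (placeholder names).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RecommendationLookup {
    public static void main(String[] args) throws Exception {
        String selected = "Wedding"; // activity the user picked on the website

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/demo", "user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT recommended_activity, cooccurrence_count " +
                 "FROM recommendations WHERE activity = ? " +
                 "ORDER BY cooccurrence_count DESC")) {
            stmt.setString(1, selected);
            try (ResultSet rs = stmt.executeQuery()) {
                int rank = 1;
                while (rs.next()) {
                    // print the ranked recommendations for the selected activity
                    System.out.printf("%d. %s (count=%d)%n",
                        rank++, rs.getString(1), rs.getInt(2));
                }
            }
        }
    }
}
```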

Example: How Crowd-Sourcing Works (co-occurrence recommendations) Using Activities

All Users' Activity History

| User | Art Fair | Fishing | Shovel Snow | Wedding |
|------|----------|---------|-------------|---------|
| Jon  | Yes      | Yes     | Yes         | No      |
| Jane | No       | Yes     | No          | Yes     |
| Jill | Yes      | Yes     | No          | Yes     |

A new user likes to go to weddings, and we need to recommend other activities to them:
* Find the users in the history matrix who also enjoyed Wedding: U = {Jane, Jill}
* Identify the other activities those same users (U) enjoyed, and rank them by count (see the sketch after the recommendation table below)

Recommendation

| Activity | Rank | Co-occurrence Count |
|----------|------|---------------------|
| Fishing  | 1    | 2                   |
| Art Fair | 2    | 1                   |
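
To make the counting concrete, here is a toy sketch in plain Java that reproduces the recommendation table from the history matrix above. This is not the demo's Hadoop job; it only illustrates the co-occurrence counting.

```java
// Toy co-occurrence counting over the activity-history table from this post.
import java.util.*;

public class CooccurrenceDemo {
    public static void main(String[] args) {
        // user -> activities they enjoyed (from the history matrix above)
        Map<String, Set<String>> history = new LinkedHashMap<>();
        history.put("Jon",  new HashSet<>(Arrays.asList("Art Fair", "Fishing", "Shovel Snow")));
        history.put("Jane", new HashSet<>(Arrays.asList("Fishing", "Wedding")));
        history.put("Jill", new HashSet<>(Arrays.asList("Art Fair", "Fishing", "Wedding")));

        String liked = "Wedding"; // the new user's favorite activity

        // count how often other activities co-occur with "Wedding"
        Map<String, Integer> counts = new HashMap<>();
        for (Set<String> activities : history.values()) {
            if (!activities.contains(liked)) continue;   // keeps U = {Jane, Jill}
            for (String a : activities) {
                if (!a.equals(liked)) counts.merge(a, 1, Integer::sum);
            }
        }

        // rank by count, highest first
        counts.entrySet().stream()
              .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
              .forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
        // prints: Fishing -> 2, Art Fair -> 1 (matching the table above)
    }
}
```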

Predictive Algorithms on Million Song Dataset


In a Data Mining course in my graduate Software Engineering program, I had the opportunity to be part of a project in which we were to create a “recommendation engine”. The dataset we used was the Million Song Dataset, which contains 1M songs, along with play history for 380k users.

The goal was to provide a ranked list of up to 10 recommended songs based on the song currently being played. We used three algorithms: Association Rules, Naive Bayes, and user-user co-occurrence. When tested, the results were mixed: Association Rules provided the top F1 scores but also produced the fewest recommendations (a large portion of songs had fewer than 10 songs recommended). Co-occurrence was close behind with the second-best F1 score, produced the largest number of recommendations, and had the lowest computational requirements.
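
For reference, this is roughly how an F1 score is computed for one ranked recommendation list. The song IDs below are made up for illustration; the project's actual evaluation code lives in the GitHub repo linked underneath.

```java
// Rough sketch of scoring one recommendation list with F1
// (harmonic mean of precision and recall). Song IDs are hypothetical.
import java.util.*;

public class F1Example {
    static double f1(List<String> recommended, Set<String> actuallyPlayed) {
        long hits = recommended.stream().filter(actuallyPlayed::contains).count();
        if (hits == 0) return 0.0;
        double precision = (double) hits / recommended.size();   // how many recs were right
        double recall    = (double) hits / actuallyPlayed.size(); // how much listening we covered
        return 2 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        // up to 10 recommended songs for a seed song (hypothetical IDs)
        List<String> recommended = Arrays.asList("songA", "songB", "songC", "songD");
        // songs the held-out user actually played (hypothetical IDs)
        Set<String> played = new HashSet<>(Arrays.asList("songB", "songD", "songE"));
        System.out.printf("F1 = %.3f%n", f1(recommended, played)); // F1 = 0.571
    }
}
```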

Here is the full project on github.

What the heck is Mahout?


Here is the tutorial I used, by Steve Cook on YouTube.

Links to download the libraries for Java:

http://mahout.apache.org/general/downloads.html

http://www.slf4j.org/download.html

Here is the data:

MovieLens

https://code.google.com/p/guava-libraries/

The basics of Mahout (an Apache project) are that it is used to accomplish the following (a minimal recommender sketch follows the list):

  • Collaborative Filtering (recommendations)
  • Classification (spam email or not)
  • Clustering (Google news)
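
As a taste of the collaborative filtering piece, here is a minimal user-based recommender along the lines of the tutorial above. It assumes the Mahout 0.9-era Taste API plus the SLF4J and Guava jars linked earlier, and a MovieLens-style CSV file of userID,itemID,rating rows; the file path and user ID are placeholders, so treat it as a sketch rather than the tutorial's exact code.

```java
// Minimal user-based collaborative filtering with Mahout's Taste API.
// Assumes a CSV of "userID,itemID,rating" lines (e.g. MovieLens data)
// at the placeholder path below.
import java.io.File;
import java.util.List;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class MovieRecommender {
    public static void main(String[] args) throws Exception {
        DataModel model = new FileDataModel(new File("data/movies.csv")); // placeholder path
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

        // top 3 movie recommendations for user 1
        List<RecommendedItem> items = recommender.recommend(1, 3);
        for (RecommendedItem item : items) {
            System.out.println(item.getItemID() + " : " + item.getValue());
        }
    }
}
```

Swapping the similarity measure or the neighborhood size changes how aggressively the recommender generalizes from similar users, which is the main knob the tutorial plays with.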

Getting Started with Hadoop


To begin playing around with what Hadoop does, I decided to go down the path of using the HortonWorks Sandbox.  One of the first things the setup has you do is install Oracle VirtualBox, which is virtualization software; the Sandbox runs inside the virtual machine it creates.  One note: the browser IP in the tutorial is wrong; it should be http://127.0.0.1:8000 to open the Sandbox GUI.

I then proceeded to follow the “Hello World” tutorial, in which I was able to import some actual data from the NYSE and run some Hive and Pig queries.  I have a substantial SQL background (though it is not essential), so it was a breeze.

I’m impressed by how easy to follow and well written the tutorial was.  Great way to get started!