Web Traffic using Linear Modeling


I wanted to illustrate a simple example of understanding the rate of change of web traffic over time using linear regression. My data is web traffic hits by day for the past 8 months; here are the top few rows:

date,visits
10/11/14,37896
10/12/14,24098
10/13/14,35550
10/14/14,38610
10/15/14,35739
10/16/14,30316
…. through May 2015

First, I want to plot the data and add a line of best fit:

plot(data$date, data$visits, pch=19, col="blue", main="Web Traffic",
     xlab="Date", ylab="Visits")
lm1 <- lm(data$visits ~ data$date)
abline(lm1, col="red", lwd=3)


#(Intercept) data$date
#-2404.5259 148.9

To interpret this model: we see roughly 149 additional hits each day.
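The same slope-per-day idea can be sketched in plain Python with a hand-rolled least-squares fit. The data below is synthetic (built to grow by exactly 149 hits/day), not my actual traffic:

```python
# Sketch: recover the "extra hits per day" slope with ordinary least squares.
# The data here is synthetic, constructed to grow by exactly 149 hits/day.

def ols_fit(x, y):
    """Simple least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

days = list(range(100))
visits = [30000 + 149 * d for d in days]   # 149 extra hits per day
a, b = ols_fit(days, visits)
print(round(b, 1))   # slope, in hits per day
```

The slope coefficient is in units of the x variable, which is why a fit against dates directly reads as "extra hits per day".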

That model was great for the absolute increase, but what if we want the average percentage increase? To do so, we can run the linear regression on the log of visits and exponentiate the coefficients:

#(Intercept)  data$date
#    0.00000    1.00322

To interpret: the exponentiated slope is the multiplicative daily factor, so this is about a 0.3% increase in web traffic per day.

Another way we could look at the change per day is a generalized linear model with a Poisson family.

plot(data$date, data$visits, pch=19, col="green", xlab="Date", ylab="Visits")
glm1 <- glm(data$visits ~ data$date, family="poisson")
abline(lm1, col="red", lwd=3)                      # linear model line, for comparison
lines(data$date, glm1$fitted, col="blue", lwd=3)   # Poisson GLM fit


confint(glm1,level=0.95) # CI
#2.5 % 97.5 %
#(Intercept) -55.999943551 -45.190626728
#data$date 0.002976299 0.003632503

To interpret: we are 95% confident that the daily increase in web hits (on the log scale) falls between 0.003 and 0.004, which is right in line with the previous method of running linear regression on the log of visits.
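Because the Poisson slope is on the log scale, exponentiating each bound of the interval converts it into a daily percent increase. A quick sketch using the `confint()` bounds shown above:

```python
import math

# Sketch: convert the Poisson slope CI (log scale) into a percent increase.
# Bounds are the confint() output for data$date shown above.
lo, hi = 0.002976299, 0.003632503
pct_lo = (math.exp(lo) - 1) * 100
pct_hi = (math.exp(hi) - 1) * 100
print(round(pct_lo, 2), round(pct_hi, 2))   # roughly 0.30% to 0.36% per day
```

For small slopes, exp(b) - 1 is approximately b, which is why the raw coefficients already read as percent growth.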

Convert Tab-Delimited to CSV


This is a very simple exercise, but necessary at times in Data Science.

f = open("input_data.txt")            # input file, tab delimited
out = open("output_data.csv", "w")    # output file (name it whatever you like)
f.readline()                          # skip the first line if needed for header removal
for line in f:
    out.write(line.replace("\t", ","))
f.close()
out.close()
print('file created successfully')
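A straight replace() works for clean data, but Python's csv module also quotes fields that themselves contain commas, which a plain replace would corrupt. A minimal sketch (file names here are illustrative, and a tiny sample input is created so the snippet runs on its own):

```python
import csv

# Sketch: tab-delimited to CSV via the csv module, which quotes fields
# containing commas. File names are illustrative.

with open("input_data.txt", "w") as f:   # tiny sample input for the demo
    f.write("date\tvisits\n10/11/14\t37896\n")

with open("input_data.txt", newline="") as src, \
     open("output_data.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src, delimiter="\t"):
        writer.writerow(row)
```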

Probability of Web Clicks in a Day


Below is a simplified example using R in which you can compute the probability that a day has a certain number of visits. The web visits are approximately normally distributed, and we want to know the probability of getting fewer than 50 visits in a day.

# web traffic for last seven days
web_visits <- c(64, 34, 55, 47, 52, 59, 77)
visits_day <- mean(web_visits)       # mean = 55.4
sd_visits_day <- sd(web_visits)      # standard deviation = 13.5
goal_visits <- 50
# result:
pnorm(goal_visits, mean=visits_day, sd=sd_visits_day)
# 0.344, or a 34.4% probability you'll have fewer than 50 web visits
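The same calculation can be sketched in Python without any statistics library, since the normal CDF can be written in terms of math.erf. The visit counts are the same seven days used above:

```python
import math

# Sketch: P(fewer than 50 visits) under a normal model, using erf for the CDF.
web_visits = [64, 34, 55, 47, 52, 59, 77]
mean = sum(web_visits) / len(web_visits)
var = sum((v - mean) ** 2 for v in web_visits) / (len(web_visits) - 1)
sd = var ** 0.5

def norm_cdf(x, mu, sigma):
    """Standard normal CDF evaluated at (x - mu) / sigma."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

p = norm_cdf(50, mean, sd)
print(round(p, 3))   # ~0.344, matching pnorm() in R
```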

source: Statistical Inference, Johns Hopkins University/Coursera, by Brian Caffo

Combine .txt Files in Python


Using HDFS (the Hadoop Distributed File System), I was able to save data from a query with the hopes of using it for analysis. From there, I moved the files to my local machine using SCP. However, I was dealing with over 700 .txt files that needed to be combined. Looking at the file names, they are in the format “000000_0” through “000770_0”. In Unix, the simple way is a command such as “cat * > new_file_name”, which will combine all the files. But there could be times you don’t want all the files in a directory combined, or you need some sort of logic applied. Using Python, here is my code to make it happen:

# open the output file "combined.csv"
out = open("combined.csv", "w")

# first file (keep its header):
for line in open("000000_0"):
    out.write(line)

# files with 1 digit:
for num in range(1, 10):
    f = open("00000" + str(num) + "_0")
    # f.readline()  # skip the header if present
    for line in f:
        out.write(line)
    f.close()

# files with 2 digits:
for num in range(10, 100):
    f = open("0000" + str(num) + "_0")
    # f.readline()  # skip the header if present
    for line in f:
        out.write(line)
    f.close()

# files with 3 digits:
for num in range(100, 771):
    f = open("000" + str(num) + "_0")
    # f.readline()  # skip the header if present
    for line in f:
        out.write(line)
    f.close()

out.close()
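The three digit-count loops can also be collapsed into one by zero-padding the number with zfill. A sketch of that approach (it creates a few small sample files so it runs stand-alone; in practice the range would cover 0 through 770 over the real HDFS export files):

```python
# Sketch: one loop over zero-padded file names instead of three digit-count loops.

def combine(filenames, out_path, skip_header_after_first=True):
    """Concatenate files, optionally dropping the header of all but the first."""
    with open(out_path, "w") as out:
        for i, name in enumerate(filenames):
            with open(name) as f:
                if skip_header_after_first and i > 0:
                    next(f)   # drop repeated header line
                for line in f:
                    out.write(line)

# sample files standing in for "000000_0" ... "000770_0"
names = [str(n).zfill(6) + "_0" for n in range(3)]
for i, name in enumerate(names):
    with open(name, "w") as f:
        f.write("date,visits\n10/1%d/14,100\n" % i)

combine(names, "combined.csv")
```

str(num).zfill(6) produces "000000", "000009", "000770", and so on, so a single range(0, 771) covers every file name.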