Hi everyone, I'm studying simple machine learning algorithms, beginning with simple gradient descent, but I'm having trouble implementing it in Python.

Here is the example I'm trying to reproduce. I've got data about houses (living area in feet², number of bedrooms) with the resulting price:

Living area (feet²): 2104

Number of bedrooms: 3

Price (1000$s): 400

I'm trying to do a simple regression using the gradient descent method, but my algorithm won't work... The algorithm deliberately avoids vectors (I'm trying to understand it step by step).

import sys
import random

i = 1
derror = sys.maxint          # change in error between iterations
error = 0
step = 0.0001                # learning rate
dthresh = 0.1                # stop when the error changes by less than this

# random initial parameters
theta0 = random.random()
theta1 = random.random()
theta2 = random.random()

while derror > dthresh:
    # prediction error for the single data point (2104 sq ft, 3 bedrooms) -> 400
    diff = 400 - theta0 - 2104 * theta1 - 3 * theta2
    # gradient descent update for each parameter
    theta0 = theta0 + step * diff * 1
    theta1 = theta1 + step * diff * 2104
    theta2 = theta2 + step * diff * 3
    hserror = diff**2 / 2    # squared-error cost for this point
    derror = abs(error - hserror)
    error = hserror
    print 'iteration : %d, error : %s' % (i, error)
    i += 1

I understand the math. I'm constructing a prediction function $$h_{\theta}(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2$$ with $x_1$ and $x_2$ being the variables (living area, number of bedrooms) and $h_{\theta}(x)$ the estimated price.

I'm using the cost function (hserror in the code) for a single point: $$hserror = \frac{1}{2} (h_{\theta}(x) - y)^2$$ This is a common problem, but I'm more of a software engineer and I'm learning one step at a time. Can you tell me what's wrong?
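
For reference, the update in my loop comes from differentiating this cost with respect to each parameter (taking $x_0 = 1$ for the intercept term):

$$\frac{\partial\, hserror}{\partial \theta_j} = \left(h_{\theta}(x) - y\right) x_j \quad\Longrightarrow\quad \theta_j := \theta_j - \alpha \left(h_{\theta}(x) - y\right) x_j = \theta_j + \alpha \left(y - h_{\theta}(x)\right) x_j$$

so with diff = y - h(x), the theta_j = theta_j + step * diff * x_j lines in the code are the update I'm aiming for.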

Thank you for your time.


I got it working with this code:

import sys
import random

# full data set: (living area, #bedrooms) -> price (only the first point is fitted below)
data = {(2104, 3): 400, (1600, 3): 330, (2400, 3): 369, (1416, 2): 232, (3000, 4): 540}

for x in range(10):          # 10 runs with different random starting points
    i = 1
    derror = sys.maxint
    error = 0
    step = 0.00000001        # much smaller learning rate
    dthresh = 0.0000000001   # much tighter stopping threshold

    theta0 = random.random() * 100
    theta1 = random.random() * 100
    theta2 = random.random() * 100

    while derror > dthresh:
        diff = 400 - (theta0 + 2104 * theta1 + 3 * theta2)
        theta0 = theta0 + step * diff * 1
        theta1 = theta1 + step * diff * 2104
        theta2 = theta2 + step * diff * 3
        hserror = diff**2 / 2
        derror = abs(error - hserror)
        error = hserror
        #print 'iteration : %d, error : %s, derror : %s' % (i, error, derror)
        i += 1

    print ' theta0 : %f, theta1 : %f, theta2 : %f' % (theta0, theta1, theta2)
    print ' done : %f' % (theta0 + 2104 * theta1 + 3 * theta2)

which ends up with answers like this:

 theta0 : 48.412337, theta1 : 0.094492, theta2 : 50.925579
 done : 400.000043
 theta0 : 0.574007, theta1 : 0.185363, theta2 : 3.140553
 done : 400.000042
 theta0 : 28.588457, theta1 : 0.041746, theta2 : 94.525769
 done : 400.000043
 theta0 : 42.240593, theta1 : 0.096398, theta2 : 51.645989
 done : 400.000043
 theta0 : 98.452431, theta1 : 0.136432, theta2 : 4.831866
 done : 400.000043
 theta0 : 18.022160, theta1 : 0.148059, theta2 : 23.487524
 done : 400.000043
 theta0 : 39.461977, theta1 : 0.097899, theta2 : 51.519412
 done : 400.000042
 theta0 : 40.979868, theta1 : 0.040312, theta2 : 91.401406
 done : 400.000043
 theta0 : 15.466259, theta1 : 0.111276, theta2 : 50.136221
 done : 400.000043
 theta0 : 72.380926, theta1 : 0.013814, theta2 : 99.517853
 done : 400.000043
+3  A: 

The first issue is that running this with only one piece of data gives you an underdetermined system... this means it may have an infinite number of solutions. With three unknown parameters, you'd want at least 3 data points, preferably many more.
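
Just to make that concrete, here is a rough sketch (not your code, and only lightly thought through) of a batch update that uses all five points from your second listing; the g0/g1/g2 accumulators and the 1e-8 step size are my own choices:

import random

# sketch: batch gradient descent over all five data points
data = {(2104, 3): 400, (1600, 3): 330, (2400, 3): 369, (1416, 2): 232, (3000, 4): 540}

step = 1e-8                      # hand-picked; has to be tiny because the features are unscaled
theta0 = random.random()
theta1 = random.random()
theta2 = random.random()

for it in range(100000):
    # accumulate the gradient of the summed squared error over every house
    g0 = g1 = g2 = 0.0
    for (area, beds), price in data.items():
        diff = price - (theta0 + theta1 * area + theta2 * beds)
        g0 += diff * 1
        g1 += diff * area
        g2 += diff * beds
    theta0 = theta0 + step * g0
    theta1 = theta1 + step * g1
    theta2 = theta2 + step * g2

print 'theta0 : %f, theta1 : %f, theta2 : %f' % (theta0, theta1, theta2)

With the living area left in square feet the step has to be tiny and convergence is slow; rescaling the features (say, dividing the area by 1000) would be the usual remedy.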

Secondly, using gradient descent where the step size is a scaled version of the gradient is not guaranteed to converge except in a small neighbourhood of the solution. You can fix that by switching either to a fixed-size step in the direction of the negative gradient (slow) or to a line search in the direction of the negative gradient (faster, but slightly more complicated).

So for a fixed step size, instead of

theta0 = theta0 - step * dEdtheta0
theta1 = theta1 - step * dEdtheta1
theta2 = theta2 - step * dEdtheta2

you do this:

n = max(abs(dEdtheta0), abs(dEdtheta1), abs(dEdtheta2))
theta0 = theta0 - step * dEdtheta0 / n
theta1 = theta1 - step * dEdtheta1 / n
theta2 = theta2 - step * dEdtheta2 / n
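
To make that concrete, here is a rough sketch on your single data point (the 0.0001 step length, the iteration cap, and the abs(diff) < 0.5 stopping test are all just illustrative choices):

import random

# sketch: fixed-size normalised steps on the single point (2104, 3) -> 400
step = 0.0001                    # now a real step length, not a scale factor on the gradient
theta0 = random.random()
theta1 = random.random()
theta2 = random.random()

for i in range(1, 100000):
    diff = 400 - (theta0 + 2104 * theta1 + 3 * theta2)
    if abs(diff) < 0.5:          # crude stopping test, good enough for the sketch
        break
    # gradients of E = diff**2 / 2 with respect to each theta
    dEdtheta0 = -diff * 1
    dEdtheta1 = -diff * 2104
    dEdtheta2 = -diff * 3
    # normalise so the largest parameter move per iteration is exactly 'step'
    n = max(abs(dEdtheta0), abs(dEdtheta1), abs(dEdtheta2))
    theta0 = theta0 - step * dEdtheta0 / n
    theta1 = theta1 - step * dEdtheta1 / n
    theta2 = theta2 - step * dEdtheta2 / n

print 'stopped after %d iterations, prediction : %f' % (i, theta0 + 2104 * theta1 + 3 * theta2)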

It also looks like you may have a sign error in your steps.

I'm also not sure that derror is a good stopping criterion. (But stopping criteria are notoriously hard to get "right".)

My final point is that gradient descent is horribly slow for parameter fitting. You probably want to use conjugate-gradient or Levenberg-Marquardt methods instead. I suspect that both of these methods already exist for Python in the numpy or scipy packages (which aren't part of Python by default but are pretty easy to install).
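
For example, here is a sketch (reusing the data dictionary from your second listing) that fits the same linear model with scipy.optimize.curve_fit, which uses a Levenberg-Marquardt least-squares solver by default when no bounds are given:

import numpy as np
from scipy.optimize import curve_fit

data = {(2104, 3): 400, (1600, 3): 330, (2400, 3): 369, (1416, 2): 232, (3000, 4): 540}
X = np.array(list(data.keys()), dtype=float)      # columns: living area, bedrooms
y = np.array(list(data.values()), dtype=float)    # prices

def h(x, theta0, theta1, theta2):
    # x has shape (2, n): row 0 is the living area, row 1 the number of bedrooms
    return theta0 + theta1 * x[0] + theta2 * x[1]

thetas, _ = curve_fit(h, X.T, y)
print 'theta0 : %f, theta1 : %f, theta2 : %f' % tuple(thetas)

With five points and three parameters the system is no longer underdetermined, so this returns a single set of thetas rather than the family of solutions your one-point runs produce.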

Michael Anderson
Thank you for your great answer! I know that it's not a great approach to the problem; I wanted to try implementing this simple solution first and then use a variable step and try both "batch gradient descent" and "stochastic gradient descent".
ssaboum
Just to be sure, what is the expression you use for dEdtheta?
ssaboum
I'd take d = 400 - theta0 - 2104 * theta1 - 3 * theta2, E = d^2, dEdtheta0 = 2 * d * (-1), dEdtheta1 = 2 * d * (-2104), dEdtheta2 = 2 * d * (-3), which would make the signs in your original equations correct. But if you look at the size of the gradients, they are huge compared to the 0.0001 scale factor, which means you end up taking steps that are too large from your starting point. Normalising the gradient, or limiting the step size in some other manner, should solve your issue.
Michael Anderson
I tried setting the step to 0.00000000001 and now the error is slowly decreasing, but the final answers for the thetas always end up as (0, 0, 0)...
ssaboum
That shouldn't be the case, as at (0,0,0) you should have diff = 400, so all the thetas should increase at the end of that step. (Though it may take a ridiculously long time - with your step size of 1e-9, you'll only be moving by 1e-6 or so - this is why I suggest you normalise the step size in some way.)
Michael Anderson