Interpreting regression coefficients in linear regression
The idea with linear regression is that we fit a line to our data.
One of the first questions this process raises is: what do the parameters alpha and beta represent?
With reference to the simple bivariate case:
- alpha (α) represents the value of Y when X is 0 (the intercept).
- beta (β) represents the gradient of our line. (Think about y = mx + c.)
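As a quick sketch of this (using numpy and made-up data, purely for illustration), we can fit a bivariate regression and read off the two parameters:

```python
import numpy as np

# Synthetic data for illustration: Y = 2 + 3*X plus some noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
Y = 2.0 + 3.0 * X + rng.normal(0, 1, size=100)

# np.polyfit with degree 1 returns [beta, alpha] for Y = alpha + beta*X
beta, alpha = np.polyfit(X, Y, deg=1)

print(f"alpha (intercept, Y when X = 0): {alpha:.3f}")  # close to 2
print(f"beta  (gradient of the line):    {beta:.3f}")   # close to 3
```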
The above is pretty straightforward, but what happens when we introduce a third variable in our regression?
In this context, we are now working with two independent variables and one dependent variable.
We can call our independent variables X1 and X2 such that:

Y = α + β1X1 + β2X2 + ε (where ε is the error term)
With two independent variables, least squares fits a plane rather than a line: it chooses α, β1 and β2 to minimise the sum of the squared vertical distances (residuals) of each point from that plane.
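Here is a minimal sketch of this, again with invented data: numpy's least-squares solver estimates α, β1 and β2 by minimising exactly that sum of squared residuals.

```python
import numpy as np

# Synthetic data: Y = 1 + 2*X1 - 0.5*X2 plus noise, purely illustrative
rng = np.random.default_rng(1)
n = 200
X1 = rng.uniform(0, 10, n)
X2 = rng.uniform(0, 10, n)
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + rng.normal(0, 1, n)

# Design matrix with a column of ones so the intercept alpha is estimated too
A = np.column_stack([np.ones(n), X1, X2])

# Least squares: minimises the sum of squared residuals ||Y - A @ coeffs||^2
coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
alpha, beta1, beta2 = coeffs

print(f"alpha: {alpha:.3f}, beta1: {beta1:.3f}, beta2: {beta2:.3f}")
```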
In most situations, we won't be dealing with just one or two explanatory variables. Typically, we have a whole host of explanatory variables to consider.
How can we now think about multiple regression given this new information?
If we think about the situation pragmatically, we have now run out of spatial dimensions to graph any more variables. However, we can still think about what the individual β1 to βn represent.
We can say that β1 represents the marginal effect on Y of having one more unit of X1.
Why does it represent the marginal effect?
We can think about it as the change in Y from having one more unit of X1 whilst everything else is held constant.
We can also show this through partial differentiation. Differentiating Y = α + β1X1 + β2X2 + … + βnXn + ε with respect to X1 gives:

∂Y/∂X1 = β1
This is why β1 represents the "partial effect" in econometrics.
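To make the "everything else held constant" point concrete, here is a small sketch with hypothetical fitted coefficients: nudging X1 up by one unit while fixing X2 changes the fitted Y by exactly β1.

```python
# Hypothetical fitted coefficients, just for illustration
alpha, beta1, beta2 = 1.0, 2.0, -0.5

def predict(x1, x2):
    # Fitted regression: Y_hat = alpha + beta1*X1 + beta2*X2
    return alpha + beta1 * x1 + beta2 * x2

x1, x2 = 4.0, 7.0
# One more unit of X1, with X2 held constant
effect = predict(x1 + 1, x2) - predict(x1, x2)

print(effect)  # 2.0 -- exactly beta1, the partial effect of X1 on Y
```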