views: 286
answers: 3
I am trying to eliminate outliers in SQL Server 2008 by standard deviation. I would like only records that contain a value in a specific column within +/- 1 standard deviation of that column's mean.

How can I accomplish this?

+3  A: 

SQL Server has an aggregate function called STDEV that gives you the (sample) standard deviation. That handles the hard part; after that, you just keep the rows whose values fall between the mean minus one STDEV and the mean plus one STDEV.

This is one way you could go about doing it -

    create table #test
    (
        testNumber int
    )

    INSERT INTO #test (testNumber)
    SELECT 2
    UNION ALL SELECT 4
    UNION ALL SELECT 4
    UNION ALL SELECT 4
    UNION ALL SELECT 5
    UNION ALL SELECT 5
    UNION ALL SELECT 7
    UNION ALL SELECT 9

    SELECT t.testNumber
    FROM #test t
    JOIN (
        SELECT STDEV(testNumber) AS [STDEV], AVG(testNumber) AS mean
        FROM #test
    ) X ON t.testNumber >= X.mean - X.STDEV
       AND t.testNumber <= X.mean + X.STDEV
Mike M.
Note that for the data above the STDEV is a little over two (about 2.14) and the mean is 5, so the query returns the values 4, 4, 4, 5, 5, 7 and drops the outliers 2 and 9.
Mike M.
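As a quick cross-check of those numbers outside SQL Server, here is a sketch in Python using the standard library's statistics module (statistics.stdev computes the same sample standard deviation as T-SQL's STDEV):

```python
from statistics import mean, stdev

values = [2, 4, 4, 4, 5, 5, 7, 9]

m = mean(values)   # 5
s = stdev(values)  # sample standard deviation, about 2.14

# keep only the values within one standard deviation of the mean
kept = [v for v in values if m - s <= v <= m + s]
print(m, round(s, 2), kept)  # 5 2.14 [4, 4, 4, 5, 5, 7]
```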
+2  A: 

If you assume a bell curve (normal) distribution of events, then only about 68% of values will fall within one standard deviation of the mean (about 95% are covered by two standard deviations).
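Those 68% / 95% figures can be sanity-checked with a quick simulation (a sketch in Python; it assumes a normal distribution, as the answer does):

```python
import random

random.seed(42)
# draw 100,000 samples from a standard normal distribution
samples = [random.gauss(0, 1) for _ in range(100_000)]

within_1 = sum(abs(x) <= 1 for x in samples) / len(samples)
within_2 = sum(abs(x) <= 2 for x in samples) / len(samples)

print(f"within 1 std dev:  {within_1:.1%}")  # roughly 68%
print(f"within 2 std devs: {within_2:.1%}")  # roughly 95%
```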

I would load a variable with the standard deviation of your range (using the STDEV or STDEVP SQL function, depending on whether your data is a sample or the whole population) and then select the values that fall within the appropriate number of standard deviations.

    declare @stdtest table (colname varchar(20), colvalue int)

    insert into @stdtest (colname, colvalue) values ('a', 2)
    insert into @stdtest (colname, colvalue) values ('b', 4)
    insert into @stdtest (colname, colvalue) values ('c', 4)
    insert into @stdtest (colname, colvalue) values ('d', 4)
    insert into @stdtest (colname, colvalue) values ('e', 5)
    insert into @stdtest (colname, colvalue) values ('f', 5)
    insert into @stdtest (colname, colvalue) values ('g', 7)
    insert into @stdtest (colname, colvalue) values ('h', 9)

    -- note: a bare "decimal" defaults to decimal(18,0), so @std and the
    -- bounds below are rounded to whole numbers
    declare @std decimal
    declare @mean decimal
    declare @lower decimal
    declare @higher decimal
    declare @noofstds int

    select @std = STDEV(colvalue), @mean = AVG(colvalue) from @stdtest

    -- 68%
    set @noofstds = 1
    select @lower = @mean - (@noofstds * @std)
    select @higher = @mean + (@noofstds * @std)

    select @lower, @higher, * from @stdtest where colvalue between @lower and @higher

    -- returns rows with a colvalue between 3 and 7 inclusive

    -- 95%
    set @noofstds = 2
    select @lower = @mean - (@noofstds * @std)
    select @higher = @mean + (@noofstds * @std)

    select @lower, @higher, * from @stdtest where colvalue between @lower and @higher

    -- returns rows with a colvalue between 1 and 9 inclusive
amelvin
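The same two-step approach (compute the bounds once, then filter) can be sketched in Python for comparison. Note that this uses the exact standard deviation, so the one-std-dev bounds come out near 2.86 and 7.14 rather than the whole numbers the `decimal` variables above produce, though the same rows survive the filter:

```python
from statistics import mean, stdev

data = {'a': 2, 'b': 4, 'c': 4, 'd': 4, 'e': 5, 'f': 5, 'g': 7, 'h': 9}

std = stdev(data.values())
avg = mean(data.values())

def within(no_of_stds):
    """Return the rows whose value lies within no_of_stds std devs of the mean."""
    lower = avg - no_of_stds * std
    higher = avg + no_of_stds * std
    return {k: v for k, v in data.items() if lower <= v <= higher}

print(within(1))  # drops the outlier rows 'a' (2) and 'h' (9)
print(within(2))  # keeps every row
```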
About 99.7% are covered by plus or minus three standard deviations, but that's only valid if your population truly follows a normal distribution. Values outside the six sigma range are presumed rare, but in real life that's often not the case.
duffymo
@duffymo I agree and I agree with your comments below (+1) and that is why I was careful to mention a 'bell curve distribution of events' - a normal distribution. If the distribution is skewed then obviously this changes things.
amelvin
But the normal distribution is always an idealization, because you can never tell when a black swan outlier is coming along next.
duffymo
@duffymo there are enough black sheep on SO without black swans joining them!
amelvin
Tell that to Taleb: http://www.fooledbyrandomness.com/
duffymo
A: 

I'd be careful and think about what you're doing. Throwing away outliers might mean that you're discarding information that might not fit into a pre-conceived world view that could be quite wrong. Those outliers might be "black swans" that are rare, though not as rare as you'd think, and quite significant.

You give no context or explanation of what you're doing. It's easy to cite a function or technique that will fulfill the needs of your particular case, but I thought it appropriate to post the caution until additional information is supplied.

duffymo