I am trying to eliminate outliers in SQL Server 2008 by standard deviation. I would like only records that contain a value in a specific column within +/- 1 standard deviation of that column's mean.
How can I accomplish this?
There is an aggregate function called STDEV in SQL Server that will give you the standard deviation. That takes care of the hard part; after that you just keep the rows whose value falls between the mean minus one STDEV and the mean plus one STDEV.
This is one way you could go about doing it:
create table #test
(
testNumber int
)
INSERT INTO #test (testNumber)
SELECT 2
UNION ALL
SELECT 4
UNION ALL
SELECT 4
UNION ALL
SELECT 4
UNION ALL
SELECT 5
UNION ALL
SELECT 5
UNION ALL
SELECT 7
UNION ALL
SELECT 9
-- Cast to float so AVG does not truncate the mean to an integer
SELECT t.testNumber
FROM #test t
JOIN (
    SELECT STDEV(testNumber) AS [STDEV], AVG(CAST(testNumber AS float)) AS mean
    FROM #test
) X ON t.testNumber >= X.mean - X.STDEV
   AND t.testNumber <= X.mean + X.STDEV
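If you prefer, the same filter can be written with a CTE (supported in SQL Server 2008), which some people find easier to read. This is just a sketch of the equivalent query, reusing the #test table and column from the example above:

-- Compute the mean and standard deviation once, then filter against them
;WITH stats AS (
    SELECT STDEV(testNumber) AS [STDEV],
           AVG(CAST(testNumber AS float)) AS mean
    FROM #test
)
SELECT t.testNumber
FROM #test t
CROSS JOIN stats s
WHERE t.testNumber BETWEEN s.mean - s.[STDEV] AND s.mean + s.[STDEV]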
If you are assuming a bell curve (normal) distribution, then only about 68% of values will fall within 1 standard deviation of the mean (about 95% are covered by 2 standard deviations).
I would load a variable with the standard deviation of your range (derived using the STDEV / STDEVP SQL functions) and then select the values that are within the appropriate number of standard deviations.
declare @stdtest table (colname varchar(20), colvalue int)
insert into @stdtest (colname, colvalue) values ('a', 2)
insert into @stdtest (colname, colvalue) values ('b', 4)
insert into @stdtest (colname, colvalue) values ('c', 4)
insert into @stdtest (colname, colvalue) values ('d', 4)
insert into @stdtest (colname, colvalue) values ('e', 5)
insert into @stdtest (colname, colvalue) values ('f', 5)
insert into @stdtest (colname, colvalue) values ('g', 7)
insert into @stdtest (colname, colvalue) values ('h', 9)
-- give the decimals an explicit scale; plain DECIMAL defaults to scale 0
-- and would round the standard deviation to a whole number
declare @std decimal(18, 6)
declare @mean decimal(18, 6)
declare @lower decimal(18, 6)
declare @higher decimal(18, 6)
declare @noofstds int
select @std = STDEV(colvalue), @mean = AVG(CAST(colvalue AS decimal(18, 6))) from @stdtest
--68%
set @noofstds = 1
select @lower = @mean - (@noofstds * @std)
select @higher = @mean + (@noofstds * @std)
select @lower, @higher, * from @stdtest where colvalue between @lower and @higher
--returns the rows with colvalue 4, 4, 4, 5, 5 and 7 (2 and 9 fall outside one standard deviation)
--95%
set @noofstds = 2
select @lower = @mean - (@noofstds * @std)
select @higher = @mean + (@noofstds * @std)
select @lower, @higher, * from @stdtest where colvalue between @lower and @higher
--returns every row (all values lie within two standard deviations of the mean)
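This answer mentions both STDEV and STDEVP. STDEVP computes the population standard deviation (dividing by N rather than N - 1), so it is slightly smaller than STDEV for the same data and gives a marginally tighter band. A sketch of the swap, reusing the table variable and variables declared above:

-- Use the population standard deviation when the table holds the whole
-- population rather than a sample of it
select @std = STDEVP(colvalue), @mean = AVG(CAST(colvalue AS decimal(18, 6))) from @stdtest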
I'd be careful and think about what you're doing. Throwing away outliers can mean discarding information simply because it doesn't fit a preconceived world view, and that view could be quite wrong. Those outliers might be "black swans" that are rare, though not as rare as you'd think, and quite significant.
You give no context or explanation of what you're doing. It's easy to cite a function or technique that will fulfill the needs of your particular case, but I thought it appropriate to post the caution until additional information is supplied.