If I'm understanding the question correctly, this isn't super difficult to do, but you need to know the possible range of your input values before you can determine the normalized output values.
For instance if your input values can range from -500 to +500, then a simple function like input/500.0 would do the trick.
I'm going to go out on a limb and assume you want whatever range of values you have mapped into -1 to 1, with the scaling set so that the most extreme input values land on -1 and 1 exactly.
In that case, you want to iterate over all your values (O(n), bleh!...but you're going to have to touch every value to scale it anyway, so O(n) is unavoidable) and find both the lowest and highest values. Take the difference between the two, divide it by the size of your output range, and use that as your scaling factor. Then, for each number you want to scale: subtract the minimum input value, divide by the scaling factor, and add the output minimum (-1).
That sounds like a lot of gibberish, so let me provide an example. Say that after iterating, you discover your minimum input value is -300 and your maximum input value is 700. The difference between these two (abs(max - min)) is 1000. Since your output range is of size 2 (-1 to 1), divide that difference by 2...this gives you your final scaling factor of 1000/2 = 500.
You then take the first point in your input data...we'll say it's 334. You subtract the minimum value from it, giving you 334 - (-300) = 634. You divide that number by your scaling factor, so you get 634/500 = 1.268. Take that number and subtract 1 (otherwise we'd scale to (0 to 2) instead of (-1 to 1)). That gives 0.268. That's your answer. Do this for all the points in your set, and you're done.
A quick sanity check shows that if we try 700 (the max value in our set) we get ((700 + 300) / 500) - 1 = 1. Likewise, if we try -300 we get ((-300 + 300) / 500) - 1 = -1.
So drop that into a function, and you're good.
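As a sketch of what that function might look like (Python here purely for illustration; the same arithmetic works in any language, and the function name is my own):

```python
def normalize(values, out_min=-1.0, out_max=1.0):
    """Linearly map values so min(values) -> out_min and max(values) -> out_max."""
    lo, hi = min(values), max(values)
    if hi == lo:
        raise ValueError("all input values are equal; the scale is undefined")
    # e.g. for inputs spanning -300..700 and output -1..1: (700 - (-300)) / 2 = 500
    scale = (hi - lo) / (out_max - out_min)
    return [(v - lo) / scale + out_min for v in values]
```

With the example data above, `normalize([-300, 334, 700])` gives `[-1.0, 0.268, 1.0]` (the middle value up to floating-point rounding), matching the sanity check.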
If your input range doesn't vary based on your input, but is a known constant, you can, of course, avoid the iteration through the data at the beginning, which would certainly be a good thing.
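In that known-constant case, the whole thing collapses to a single division (the -500 to +500 limit here is just the made-up range from earlier, not anything your hardware guarantees):

```python
def normalize_fixed(v, limit=500.0):
    """Map a value from a known symmetric range [-limit, limit] to [-1, 1]."""
    return v / limit
```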
So hopefully that helps. But if you have any questions, or if I've misunderstood the concept, let me know.
Edit: Responding to your statement:
This is the get mouse movement. The values are the difference in mouse coordinates between one tick and the next. I'm not sure what the max value for that is.
Presumably there is some kind of driver clamping on this system...if a normal mouse movement would be, say, (-5, 3), presumably the mouse simply can't register arbitrarily large deltas like (1000000, 1000000). You need to find out what the operating range for the mouse is, and what values can actually come out of it, and that will give you your min and max values.