The first number is the precision, the second is the scale. The SQL Server equivalent is DECIMAL / NUMERIC, and you define it like so:
DECLARE @MyDec decimal(18,2)
The 18 is the precision: the maximum total number of decimal digits that can be stored (for example, in 123.45 the precision is 5). The 2 is the scale: the maximum number of digits stored to the right of the decimal point (in 123.45 the scale is 2).
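A quick sketch of how that plays out in practice (the variable name is just for illustration): values are rounded to the declared scale, and anything needing more than p - s digits to the left of the decimal point raises an overflow error:

DECLARE @Price decimal(5,2) = 123.456;
SELECT @Price;                          -- 123.46, rounded to the scale of 2
SELECT CAST(1234.5 AS decimal(5,2));    -- fails: arithmetic overflow, only 5 - 2 = 3 digits fit to the left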
See this article
Just remember that the higher the precision, the more storage bytes are used, so keep it to the minimum you actually need.
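If you want to check the sizes yourself, DATALENGTH shows the bytes a decimal value occupies; the documented storage sizes are 5 bytes for precision 1-9, 9 for 10-19, 13 for 20-28 and 17 for 29-38:

SELECT DATALENGTH(CAST(1 AS decimal(9,2)));    -- 5 bytes
SELECT DATALENGTH(CAST(1 AS decimal(19,2)));   -- 9 bytes
SELECT DATALENGTH(CAST(1 AS decimal(38,2)));   -- 17 bytes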
p (precision)
Specifies the maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision. The maximum precision is 38. The default precision is 18.

s (scale)
Specifies the maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p. Maximum storage sizes vary, based on the precision.
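So, for example, a decimal declared without arguments picks up those defaults (the variable name below is just illustrative):

DECLARE @MyDefault decimal;    -- same as decimal(18,0)
SET @MyDefault = 123.9;
SELECT @MyDefault;             -- 124, because the default scale of 0 rounds away the fraction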
Finally, it is worth mentioning that in Oracle you can define a scale greater than the precision; for instance, NUMBER(3,10) is valid in Oracle. SQL Server, on the other hand, requires that precision >= scale, so a NUMBER(3,10) column in Oracle would map to decimal(10,10) in SQL Server.
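A rough sketch of that mapping (table and column names are placeholders, and the two statements of course run on different servers): the Oracle column only holds tiny fractions with at most 3 significant digits and 10 digits after the decimal point, while the SQL Server side has to widen the precision to match the scale:

-- Oracle: scale may exceed precision
CREATE TABLE t_ora (x NUMBER(3,10));     -- accepts values like 0.0000000123
-- SQL Server: precision must be >= scale
CREATE TABLE t_sql (x decimal(10,10));   -- also accepts 0.0000000123 (and more)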