Why do we need both integers and floating-point numbers in a processor? Thank you.
They are all just numbers, so you might think there is no need to distinguish. But in many languages there's an optimization possible when doing integer math: addition, subtraction, multiplication and division map directly to CPU instructions. There are instructions that perform similar operations on floating-point numbers, but because those numbers are represented differently in the machine, and the operations themselves differ, it makes sense to bubble up the distinction between integers and floats that exists in the processor into the programming language itself.
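To make that concrete, here is a minimal C++ sketch of how the distinction surfaces in a language: the same `/` operator has different semantics (and compiles to different machine instructions) depending on whether the operands are integers or floats.

```cpp
#include <iostream>

int main() {
    // Integer division uses the CPU's integer divide: the quotient is
    // truncated and the remainder is discarded.
    int a = 7, b = 2;
    std::cout << a / b << '\n';   // prints 3

    // Floating-point division uses a different instruction on a
    // different bit-level representation, and keeps the fraction.
    double x = 7.0, y = 2.0;
    std::cout << x / y << '\n';   // prints 3.5
    return 0;
}
```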
C#, Java, C++, and other languages all have distinct types for handling integers versus floats. JavaScript makes the opposite choice: there is no special integer type, and, if I am not mistaken, all numbers are floats.
As for why you need ints and floats: floats allow a much wider range of values, although at the extreme ends (e.g., astronomical quantities) the precision falls off. You can't represent 1.37999933247474×10^24 exactly in floating-point math. Ints, on the other hand, offer precision and speed over a fixed range of numbers.
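A small C++ sketch of that precision fall-off: a `double` carries only about 15-17 significant decimal digits, so a literal that large is stored as the nearest representable value, not the exact decimal written in the source.

```cpp
#include <iostream>
#include <iomanip>

int main() {
    // The stored value is the double nearest to this literal,
    // not the literal itself.
    double d = 1.37999933247474e24;

    // Printing with extra precision exposes the rounding: the
    // low-order digits typically differ from the literal above.
    std::cout << std::setprecision(25) << d << '\n';
    return 0;
}
```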
Integers are easier on processor resources, and often faster. This was a big deal many years ago, when processors didn't even come with built-in floating point capabilities. Not so much now, but the differences can still be significant in tight code.
Integers are often all that you need.
Floating point values have a [much] greater range than integers. Also, they can represent fractional values. These features, however, come at the cost of a loss of precision.
Edit: (what I mean by loss of precision)
Integer arithmetic is always exact, so long as one doesn't provide operands which cause an overflow, or a division by zero.
This is not the case with floating point arithmetic, where parts of a value may be lost even in simple operations. The reason for this is that the tremendous range offered by floating point values makes it impossible to represent every value within that range exactly, given the [relatively] small storage (typically 4 or 8 bytes).
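A short C++ demonstration of both halves of that claim: integer arithmetic within range is exact, while the classic `0.1 + 0.2` has no exact binary representation and so misses `0.3`.

```cpp
#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::boolalpha;

    // Integer arithmetic is exact as long as it doesn't overflow.
    int i = 1 + 2;
    std::cout << (i == 3) << '\n';        // true

    // 0.1, 0.2 and 0.3 cannot be represented exactly in binary,
    // so the rounding in the operands makes the comparison fail.
    double d = 0.1 + 0.2;
    std::cout << (d == 0.3) << '\n';      // false
    std::cout << std::setprecision(17) << d << '\n';  // 0.30000000000000004
    return 0;
}
```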
Integers are for counting, floating-point numbers are for calculating. We have them both in maths (where they are called integers and real numbers respectively) so we need them in algorithms and in programs too. End of story.
Sure, the range of most fp number implementations is larger than the range of most integer implementations, but I could invent a language tomorrow that allows 512-bit integers but only 16-bit floating-point numbers (1 sign bit, 3 exponent bits, 12 significand bits). The integers would still not be closed under division, and the floating-point numbers would still be no use for counting: there is a successor function on fp numbers, but there isn't one on real numbers, and we like to pretend that fp numbers are a close implementation of real numbers.
No, integers are not easier on the processor; the processor does fundamental boolean logic operations on bits. And if processor X1 does integer arithmetic faster than fp arithmetic, a trawl through the memory banks will find a counterexample.
We don't even need fp numbers for fractions; we could use pairs of integers to represent the numerator and denominator, as sketched below.
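Here is a minimal sketch of that idea in C++. The `Rational` type and its helper are hypothetical names invented for illustration, not from any particular library; real-world versions would also guard against overflow and a zero denominator.

```cpp
#include <iostream>
#include <numeric>   // std::gcd (C++17)

// A hypothetical exact fraction: numerator / denominator.
struct Rational {
    long long num;
    long long den;

    // Reduce to lowest terms so equal values compare equal.
    void normalize() {
        long long g = std::gcd(num, den);
        if (g != 0) { num /= g; den /= g; }
    }
};

Rational add(Rational a, Rational b) {
    Rational r{a.num * b.den + b.num * a.den, a.den * b.den};
    r.normalize();
    return r;
}

int main() {
    // 1/10 + 2/10 is exactly 3/10 -- no rounding, unlike 0.1 + 0.2.
    Rational r = add({1, 10}, {2, 10});
    std::cout << r.num << '/' << r.den << '\n';   // prints 3/10
    return 0;
}
```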
The absolute precision of integers is why we use them for counting. For all practical purposes the precision of existing fp implementations is enough (now, there's a wild claim to attract disagreement!)
Integers are the most common things to use in programming tasks. They can represent memory addresses. It's easy to count from one integer to the next: just add one.
Floating-point values are used to approximate real numbers. Real numbers are the most common kind of thing in continuous math. Continuous math is used to represent the real world. (Hence the terminology "real number.")
Floating-point values cannot usually be used as integers. You can't easily count from X to the next number greater than X. They round off, and there is no guarantee that X + 1 is even a different number than X. Generally speaking, two floating-point numbers might be different if they were produced by different sequences of operations, even if the expressions are supposed to be equal.
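A tiny C++ sketch of the "X + 1 might not differ from X" point: a 32-bit `float` has a 24-bit significand, so above 2^24 (about 1.6×10^7) it can no longer represent every integer, and adding 1 rounds back to the same value.

```cpp
#include <iostream>

int main() {
    // Near 1e8 the spacing between adjacent floats is 8,
    // so adding 1 rounds back to the same number.
    float x = 1.0e8f;
    std::cout << std::boolalpha << (x + 1.0f == x) << '\n';  // true
    return 0;
}
```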
Floating-point numbers are unpredictable, like real life. Integers are ordered and efficient, like computers.
In most applications floating-point numbers can be replaced by integers: carefully define what range of values needs to be represented at what precision, and multiply by appropriate scaling factors. However, this is additional development effort, which is usually only worthwhile on small embedded platforms (i.e., small microcontrollers) that can't do the calculations in floating-point arithmetic in the available time.
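A common instance of that scaled-integer technique, sketched in C++ below: storing money as an integer count of cents instead of a floating-point number of dollars. The scale factor of 100 and the truncating tax rounding are application choices made for the example, not fixed rules.

```cpp
#include <iostream>
#include <iomanip>

int main() {
    // Scaled-integer ("fixed-point") money: store cents, not dollars,
    // so every value in range is exact.
    long long price_cents = 1999;                    // $19.99
    long long tax_cents   = price_cents * 8 / 100;   // 8% tax, truncated

    long long total = price_cents + tax_cents;       // 2158 cents
    std::cout << '$' << total / 100 << '.'
              << std::setw(2) << std::setfill('0') << total % 100
              << '\n';                               // prints $21.58
    return 0;
}
```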
With floating-point numbers you can get away without thinking about the representation of values most of the time, as long as you stay within the available range and precision. Unfortunately this is rather dangerous, because that way you may not notice when you leave the safe region.
A slightly different perspective: integers are useful for digital quantities, while floats are useful for analogue quantities. For example, while looking at boats in the harbour, use ints to count the boats and floats to represent the water level.