We know that, for example, modulo a power of two can be expressed like this:

  x % 2^n == x & (2^n - 1)


x % 2 == x & 1
x % 4 == x & 3
x % 8 == x & 7 

What about general non-power-of-two numbers?

Let's say:

x % 7 == ?

+8  A: 

This only works for powers of two (and frequently only positive ones) because they have the unique property of having only one bit set to '1' in their binary representation. Because no other class of numbers shares this property, you can't create bitwise-and expressions for most modulus expressions.
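That single-set-bit property is the same one behind the classic power-of-two test; a small illustrative sketch in C (the helper name is mine, not from the thread):

```c
#include <stdbool.h>

/* The single-set-bit property in action: for n > 0, n is a power of two
 * iff n & (n - 1) == 0, because n - 1 clears the lone set bit and sets
 * all the bits below it, leaving no bit in common with n. */
bool is_pow2(unsigned n) {
    return n != 0 && (n & (n - 1)) == 0;
}
```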

If you happen to be operating on a ternary architecture, then that changes things a bit... chances of that are about nil, however.
+5  A: 

First of all, it's actually not accurate to say that

x % 2 == x & 1

Simple counterexample: x = -1. In many languages, including Java, -1 % 2 == -1. That is, % is not necessarily the traditional mathematical definition of modulo. Java calls it the "remainder operator", for example.

With regard to bitwise optimization, only modulo powers of two can "easily" be done in bitwise arithmetic. Generally speaking, only modulo powers of base b can "easily" be done with a base-b representation of numbers.

In base 10, for example, for non-negative N, N mod 10^k is just taking the least significant k digits.


`-1 = -1 (mod 2)`, not sure what you're getting at - you mean it's [not the same](http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.17.3) as the IEEE 754 remainder?
BlueRaja - Danny Pflughoeft
@BlueRaja: the common residue for -1 in mod 2 is 1 http://en.wikipedia.org/wiki/Modular_arithmetic#Remainders
@BlueRaja: If you allow negative numbers, what you basically can be sure of (particularly since no language was mentioned) is that `(a / b) * b + a % b == a`, for C-type operators, a and b integers, b nonzero, and also that `abs(a % b) < abs(b)` with the same provisos.
David Thornley
+2  A: 

This works as a special case because computers represent numbers in base 2, and it generalizes:

(number in base b) % b^x

is equivalent to the last x digits of (number in base b).

+1  A: 

Using only the bitwise-and (&) operator in binary, no, there is not. Sketch of proof:

Suppose there were a value k such that x & k == x % (k + 1) for all x, but k != 2^n - 1. Then k must have a '0' bit somewhere below its highest '1' bit. For x == k the expression works: x & k gives k, which is correct, since k % (k + 1) == k. Now consider x == k - i for a suitable i > 0: subtracting can turn one of k's '0' bits into a '1'. (E.g., 1011 (11) becomes 0111 (7) after subtracting 100 (4): the 0 in the fours place has become a 1.) The correct result x % (k + 1) is then k - i, but x & k can never produce a 1 in a position where k's bit is 0, so it cannot equal k - i. (More directly: if k has a '0' bit at position j, take x = 2^j; then x < k + 1, so x % (k + 1) == x, yet x & k == 0.)
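The claim can also be checked by brute force over small values; an illustrative sketch (the function name is mine):

```c
#include <stdbool.h>

/* Returns true iff x & k == x % (k + 1) for every x in [0, limit). */
bool mask_matches_mod(unsigned k, unsigned limit) {
    for (unsigned x = 0; x < limit; x++)
        if ((x & k) != x % (k + 1))
            return false;
    return true;
}
/* Only masks of the form 2^n - 1 (0, 1, 3, 7, 15, ...) pass this check. */
```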

Heath Hunnicutt
+1  A: 

Using bitwise_and, bitwise_or, and bitwise_not you can transform any bit configuration into any other (i.e., this set of operators is "functionally complete"). However, for operations like modulus, the general formula would necessarily be quite complicated; I wouldn't even bother trying to recreate it.

Lie Ryan
+1  A: 

There are moduli other than powers of 2 for which efficient algorithms exist.

For example, if x is a 32-bit unsigned int, then popcnt(x & 0x55555555) - popcnt(x & 0xAAAAAAAA) is congruent to x mod 3 (a final reduction into the range 0..2 recovers x % 3).

David Harris

In this specific case (mod 7), we can still replace % 7 with bitwise operators:

// Return x % 7 for x >= 0.
int mod7(int x)
{
    while (x > 7) x = (x & 7) + (x >> 3);
    return (x == 7) ? 0 : x;
}

It works because 8 % 7 == 1, so adding the groups of three bits preserves the value mod 7. Obviously, this code is probably less efficient than a simple x % 7, and certainly less readable.

Eric Bainville