Have you ever had to use bit shifting in real programming projects? Most (if not all) high level languages have shift operators in them, but when would you actually need to use them?
One place I use them all the time is when swapping the endianness of integers for cross-platform applications. They also sometimes come in handy (along with other bit-manipulation operators) when blitting 2D graphics.
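A minimal C sketch of the kind of byte swap I mean (the swap32 name is just illustrative):

#include <stdint.h>

/* Reverse the byte order of a 32-bit integer using shifts and masks. */
static uint32_t swap32(uint32_t v)
{
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) << 8)  |
           ((v & 0x00FF0000u) >> 8)  |
           ((v & 0xFF000000u) >> 24);
}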
I've used them a few times, but pretty much always for parsing a binary file format.
- Creating nice flag values for enums (rather than typing 1, 2, 4, ... manually; see the C sketch below)
- Unpacking the data from the bit-fields (many network protocols use them)
- Z-curve traversal
- Performance hacks
I can't think of many cases where they are used for their own sake. It's usually the other way around: there is some specific problem, and it turns out that employing bit operations yields the best results (usually in terms of performance, time and/or space).
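For the first item, a minimal C sketch of shift-built flags (the PERM_* names are made up for illustration):

/* Flag values built with shifts instead of hand-typed 1, 2, 4, 8, ... */
enum Permissions {
    PERM_READ    = 1 << 0,  /* 1 */
    PERM_WRITE   = 1 << 1,  /* 2 */
    PERM_EXECUTE = 1 << 2,  /* 4 */
    PERM_DELETE  = 1 << 3   /* 8 */
};

/* Typical usage: combine with OR, test with AND. */
int can_write(unsigned flags) { return (flags & PERM_WRITE) != 0; }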
Yep. I've had to write encryption algorithms before, and those definitely use them.
They are also useful when using integers etc. for keeping track of statuses.
I used them a lot in image compression/decompression, where the bits in a bitmap were compressed. With Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding), the things being compressed consist of various numbers of bits (they're not all byte-aligned), and therefore you need to bit-shift them when you encode or decode them.
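Something like this minimal bit writer in C (put_bits and the caller-owned state are illustrative, not the original code):

#include <stdint.h>
#include <stddef.h>

/* Append the low `nbits` bits of `code`, MSB first, to a byte buffer.
   The caller owns `buf` (zero-initialized) and the running bit position. */
static void put_bits(uint8_t *buf, size_t *bitpos, uint32_t code, int nbits)
{
    for (int i = nbits - 1; i >= 0; i--) {
        if ((code >> i) & 1)
            buf[*bitpos / 8] |= (uint8_t)(0x80 >> (*bitpos % 8));
        (*bitpos)++;
    }
}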
Reasonable article here: http://greatjustice.info/the-lost-art-of-bitmasks/
For example, implementing cryptographic methods in languages such as C and C++. Binary files, compression algorithms, logical list operations: bitwise operations are always good =)
The Fast Fourier transform: the FFT and its Cooley-Tukey technique require bit-shifting operations, since the classic iterative form reorders its inputs by bit-reversed index.
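A sketch of that index bit-reversal in C:

#include <stdint.h>

/* Reverse the low `bits` bits of `x` -- the index permutation used by
   iterative Cooley-Tukey FFTs. */
static uint32_t bit_reverse(uint32_t x, int bits)
{
    uint32_t r = 0;
    for (int i = 0; i < bits; i++) {
        r = (r << 1) | (x & 1);
        x >>= 1;
    }
    return r;
}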
Yes, I used them in an MPEG-2 transport stream parser. It was easier and more readable that way.
Bit shifting doesn't solve high-level programming problems, but sometimes we have to solve lower-level problems, and it's convenient not to have to write a separate library in C to do it. That's when it gets used most, is my guess.
I have personally used it in writing an encoder for an EBCDIC character set converter.
Bit shifts are fast. They were implemented in CPU instruction sets long before division and modulus operations were. Many of us have used bit shifts for arithmetic that is simple on pencil and paper, but not available on our CPUs.
For example:
- I've used bit shifts for projects involving factoring large composites into their prime factors.
- I have also used bit shifts for finding the square and cube root of arbitrarily large integers.
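For the square-root case, here is a sketch of the shift-based digit-by-digit method in C (a standard technique, not necessarily my exact code from those projects):

#include <stdint.h>

/* Integer square root using only shifts, adds, and compares:
   test one result bit at a time, from high to low. */
static uint32_t isqrt(uint32_t n)
{
    uint32_t root = 0;
    uint32_t bit = 1u << 30;   /* highest power of 4 that fits in 32 bits */
    while (bit > n)
        bit >>= 2;
    while (bit != 0) {
        if (n >= root + bit) {
            n -= root + bit;
            root = (root >> 1) + bit;
        } else {
            root >>= 1;
        }
        bit >>= 2;
    }
    return root;
}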
I had to write a program to parse the .ifo files on DVD discs. These are the files that explain how many titles, chapters, menus, etc. are on the disc. They are made up of packed bits of all sizes and alignments. I suspect many binary formats require similar bit shifting.
I have seen bitwise operators used when multiple flags were passed as a single property parameter. For example, the number 4 = 100 in binary means that one of the three flags is set. This is not good for a public API, but it can speed things up in special cases, since checking bits is fast.
Every bitblt-er I ever wrote couldn't have been completed without the ability to slide bits left and right.
I've used them in games for packing a bunch of flags into a single byte/char for saving out to a data card, things like storing the status of unlockables. Not so much of a requirement nowadays, but it can save work.
Yes, I have. As you might suspect, it's most likely to be found in low-level programming, for example when developing device drivers. But I also worked on a C# project where I had to develop a web service that received data from medical devices. All the binary data the device stored was encoded into SOAP packets, but that binary data was compressed and encoded. So to uncompress it, you had to do lots and lots of bit manipulation, and then lots of bit shifting to parse out any useful information, for example "the device serial number is the lower half of the second byte" or something like that. I've also seen some people in the .NET (C#) world make use of bit masking and the Flags attribute; I personally never had an urge to do it.
I use it in a project for an embedded system that has to read a monitor's EDID data. Some data in an EDID is encoded like this:
Byte #3:
Horizontal Blanking -- lower 8 bits
Byte #4:
Lower Nibble: Horizontal Blanking -- upper 4 bits
Upper Nibble: something else
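Reassembling that 12-bit value is a mask and a shift; a C sketch (byte numbering per the layout above):

#include <stdint.h>

/* Rebuild horizontal blanking: byte 3 holds the low 8 bits,
   the low nibble of byte 4 holds the high 4 bits. */
static unsigned horizontal_blanking(const uint8_t *desc)
{
    return desc[3] | ((unsigned)(desc[4] & 0x0F) << 8);
}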
I still write code for systems that do not have floating point support in hardware. In these systems you need bit-shifting for nearly all your arithmetic.
You also need shifts to generate hashes. Polynomial arithmetic (CRC and Reed-Solomon codes are the mainstream applications) uses shifts as well.
However, shifts are just used because they are handy and express exactly what the writer intended. You can emulate all bit-shifts with multiplication or division if you want to, but that would be harder to write, less readable, and sometimes slower.
Compilers detect cases where a multiplication can be reduced to a shift.
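As an example of the CRC case, a minimal bitwise CRC-8 in C (polynomial 0x07 chosen for illustration; real uses pick polynomial, init value, and reflection per spec):

#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-8: shift the register left once per input bit,
   XORing in the polynomial whenever the top bit falls out. */
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}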
Yes, I've used them a lot of times. Bit twiddling is important on embedded hardware where bit-masks are very common. It's also important in games programming, when you need every last bit of performance.
Edit: Also, I use them a lot for manipulating bitmaps, for example changing the colour depth, or converting RGB <-> BGR.
Bit shifting is also required when communicating with "lower level" equipment, e.g. digital Ethernet I/O boxes or PLCs, which usually pack individual input/output values into bytes.
When converting numbers from little-endian to big-endian format, and vice versa.
Yes, it's still needed.
Here at my job, for example, we develop software that communicates with PLCs through the serial port (COMx). It's necessary to handle bits within a byte, so we use shift left/right and the logical operators OR, XOR, and AND day in, day out.
For example, suppose we need to turn on bit 3 (counting from the right) of a byte:
It's much more efficient to do:
var B: Byte;
B := B or 4;  // OR turns the bit on (value 4); XOR would only toggle it
Instead of:
var B: Byte = 0;
    s: String; // 0-based index
s := ConvertToBinary(B);
s[5] := '1';
B := ConvertToDecimal(s);
Sorry for mistakes in English; it isn't my native language.
Regards.
Yes, when performing binary communication between Java and C# applications: one uses big-endian byte ordering and the other little-endian (not necessarily in that order). I created an InputStream class that could read numbers with a different byte order, and it used bit shifting to work.
Sometimes you also want to put four shorts into the eight bytes of a long; that would be a case for bit shifting too. I think I did that many years ago...
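Something like this C sketch of the packing (the names are illustrative):

#include <stdint.h>

/* Pack four 16-bit values into one 64-bit integer, and pull one back out. */
static uint64_t pack4(uint16_t a, uint16_t b, uint16_t c, uint16_t d)
{
    return ((uint64_t)a << 48) | ((uint64_t)b << 32) |
           ((uint64_t)c << 16) |  (uint64_t)d;
}

static uint16_t unpack(uint64_t packed, int slot) /* slot 0..3, 0 = a */
{
    return (uint16_t)(packed >> (48 - 16 * slot));
}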
When I wrote in assembly language, my code was full of bit-shifting and masking.
Did it a fair amount in C, as well.
Haven't done it much in JavaScript or server languages.
Probably the best modern use is to step through a packed array of boolean values represented as ones and zeros. In assembly I used to always left shift and check the sign bit, but in higher-level languages you compare against a value.
For example, if you have 8 bits, you check the top bit with "if (a > 127) {...}". Then you left shift (or multiply by 2), do an "and" with 255 (or subtract 256 if the result overflowed 8 bits), and do it again.
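The same loop in C, walking a mask instead (a sketch, not my original code):

#include <stdint.h>
#include <stdio.h>

/* Visit the 8 packed flags in one byte, high bit first, by shifting a mask. */
static void print_flags(uint8_t a)
{
    for (uint8_t mask = 0x80; mask != 0; mask >>= 1)
        putchar((a & mask) ? '1' : '0');
    putchar('\n');
}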
Yes, bit shifting is used in low-level embedded software all the time. It can also be used as an almost magic trick to perform extremely fast math operations; have a look at
http://betterexplained.com/articles/understanding-quakes-fast-inverse-square-root/
Yes, all the time. Like these macros for packing and unpacking a 3-space coordinate to/from a 32-bit integer (x, y, z are offsets defined elsewhere; f must be unsigned so that >> shifts in zeros):
#define Top_Code(a, b, c) ((((a) + x) << 20) | (((b) + y) << 10) | ((c) + z))
#define From_Top_Code(a, b, c, f) (a = (((f) >> 20) - x), b = ((((f) & 0xffc00) >> 10) - y), c = (((f) & 0x3ff) - z))
I once (many, many years ago) wrote an output routine for a project which created Excel spreadsheets using the Excel Oper structure. This was a binary file format which required a large amount of bit twiddling. The following link gives a flavour of the Oper structure: Safari Books.
I work for a computer peripheral manufacturer. I've encountered, and had to implement code that uses bit shifts, pretty much every day.
I've done some bit shifting in C#. The app needed to normalize speech audio input, which required several math operations at the audio-sample level.
Yes. Especially in an emulator which used control and status bytes for a hardware driver. Each bit in the control byte had a special meaning and each bit in the status byte had a special meaning.
Find the nearest power of two greater or equal to given number:
1 << (int)(ceil(log2(given)))
Needed for texturing on hardware that does not support arbitrary texture sizes.
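A pure-integer alternative that does the same rounding with shifts alone (a common bit trick, valid for given >= 1; not necessarily what I used):

#include <stdint.h>

/* Round up to the next power of two by smearing the top set bit
   rightwards, then adding one. Returns v itself if already a power of two. */
static uint32_t next_pow2(uint32_t v)
{
    v--;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}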
Bit shifting is used a lot in deciphering the protocols of online games. The protocols are designed to use as little bandwidth as possible, so instead of transmitting the number of players on a server, names and so forth as int32s, all the information is packed into as few bytes as possible. It's not really necessary these days with most people using broadband, but when the protocols were originally designed people used 56k modems for gaming, so every bit counted.
The most prominent examples of this are in Valve's multiplayer games, particularly Counter-Strike and Counter-Strike: Source. The Quake 3 protocol is the same; Unreal, however, isn't quite as slimline.
Here's an example (.NET 1.1)
string data = Encoding.Default.GetString(receive);
if (data != "")
{
    // If the first byte is 254 then we have multiple packets
    if ((byte) data[0] == 254)
    {
        // Low nibble contains the packet count, high nibble the index
        packetCount = ((byte) data[8]) & 15; // indexed from 0
        packetIndex = ((byte) data[8]) >> 4;
        packetCount -= 1;
        packets[packetIndex] = data.Remove(0, 9);
    }
    else
    {
        packets[0] = data;
    }
}
Of course whether you view this as a real project or just a hobby (in C#) is up to you.
Sure. Dealing with affinity masks makes use of bit shifting.
For example, you might want to limit an application to using no more than a given number of processors (say, to charge for each processor used); you would use bit shifts to count the bits in the affinity mask.
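Counting those bits is a short shift loop in C (a sketch):

#include <stdint.h>

/* Count the set bits in an affinity mask: examine the low bit,
   shift right, repeat until the mask is empty. */
static int count_processors(uint64_t mask)
{
    int n = 0;
    for (; mask != 0; mask >>= 1)
        n += (int)(mask & 1);
    return n;
}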
I've done some work with novel crypto structures, hiding serialized structures (and subsequent updates to them) behind homomorphic adders. A lot of hashes also depend on bit operations, shifting among them.
That involved lots of bit shifting and nasty big-number operations, all in Java (admittedly, this was a reference/research implementation).
I've also done compression projects (a GZIP implementation in particular), where you need to pack bits quite frequently; I can't imagine pulling that off without << and >> (or >>>, if you're in Java).
Basically, if you're working with crypto or compression there's a good chance you'll need bit shifting at one point or another.
I used bit shifting in a web project a while back. It was an ecommerce application where each product had a number of configurable attributes. The user could pick the attributes they desired, and the UI would update to provide pricing and a SKU for the selected options.
Rather than search through the data store for the SKU matching the user's options, each combination of options corresponded to a specific hash, which was really a number created using bit math. I allowed 4 bits (16 combinations) for each option, up to five options, so 20 bits total. To calculate the hash from the user's options, I would loop over each numbered attribute adding to the hash value:
for (var i = 0; i < getSku.arguments.length; i++)
{
    index = getAttributeIndex(i, getSku.arguments[i]);
    hash += (index + 1) << (4 * i);
}
This was quite a bit faster than looping through potentially hundreds of SKUs comparing up to 5 values each time.
No, I never really have to use them. I think these binary operations are a bit deprecated for high-level languages. As someone said, they can always be emulated with other mathematical expressions, and since they're hardly ever used, they could be kept in an external library. Bit shifting in Ruby or Python sounds really weird to me, kind of like mixing high and low level.
I've used them in a poker hand evaluator.
A poker hand is represented as a 64-bit unsigned integer with a 1 bit for each card that is present in the hand. Through a combination of shifting and masking you can ask questions like "give me all the ranks for which I have at least 3 cards in my hand" etc.
It's reasonably fast, but I've since learned of faster methods where the hand is represented as an array of bytes.
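For reference, the bitmask version might look like this C sketch (assuming a layout where bit rank*4 + suit marks a card; the actual layout I used may differ):

#include <stdint.h>

/* How many cards of `rank` are in the hand? Shift that rank's four
   suit bits down to the bottom, mask them off, and count them. */
static int cards_of_rank(uint64_t hand, int rank)
{
    uint64_t suits = (hand >> (rank * 4)) & 0xF;
    int count = 0;
    for (; suits != 0; suits >>= 1)
        count += (int)(suits & 1);
    return count;
}

/* "All ranks with at least 3 cards" is then a 13-iteration loop
   testing cards_of_rank(hand, rank) >= 3. */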
Most packets of data are still bit encoded. If you work with any kind of low-level network communications, you are going to have to play with bits.
Also analyzing packets containing sound & video--I believe even MP3 tags use a few bits.
Most importantly, not being comfortable with bit manipulation means you will most likely miss MUCH better ways to implement some operations. For instance, if you were tracking the existence of 5 billion ordered objects, it's the difference between fitting them all into RAM quite comfortably for instant lookup and looking each one up in a file every time. On a task like this, I would say that someone calling themselves a software engineer who implemented it without bit manipulation was incompetent at the job.
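One bit per object makes 5 billion flags roughly 625 MB; a C sketch of such a bitset (the helper names are illustrative):

#include <stdint.h>

/* Index math is a shift (i >> 3 picks the byte) and a mask (i & 7 the bit). */
static void bit_set(uint8_t *bits, uint64_t i)
{
    bits[i >> 3] |= (uint8_t)(1u << (i & 7));
}

static int bit_test(const uint8_t *bits, uint64_t i)
{
    return (bits[i >> 3] >> (i & 7)) & 1;
}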
Yes.
Bit shifting is extremely useful in embedded applications, when memory is tight and speed is everything.
For example, instead of doing a costly multiplication operation, you can perform the same calculation using only addition and bit-shifts, which is a huge time-saver:
c := 0
while b != 0
    if (b and 1) != 0
        c := c + a
    shift a left by one
    shift b right by one
return c
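The same routine in C, as a sketch (shift-and-add, sometimes called Russian peasant multiplication):

#include <stdint.h>

/* Multiply a * b using only addition, AND, and shifts. */
static uint32_t mul_shift_add(uint32_t a, uint32_t b)
{
    uint32_t c = 0;
    while (b != 0) {
        if (b & 1)   /* this bit of b contributes a shifted copy of a */
            c += a;
        a <<= 1;     /* next power of two times a */
        b >>= 1;     /* examine the next bit */
    }
    return c;
}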