The Binary Search problem

Many developers have to learn all kinds of algorithms during their careers so they can write highly optimized code. Many of these algorithms have long histories and are well tested. One of them is the binary search method.

The binary search is a fast algorithm for finding a record in a sorted list of records. Most people already know the idea from having to guess a value between 1 and 10, or between 1 and 100. The principle is quite simple: you have some number of records and you pick the record in the middle of the list. For guessing between 1 and 100 you would pick 50 (100/2). If that is the correct value, you've won. If it is too high, you now know the value must be between 1 and 50, so you guess 25 next. If it is too low, you pick the value halfway between 50 and 100, which would be 75. You should be able to guess any value between 1 and 100 in up to 8 tries.
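To make that concrete, here is a minimal sketch of that guessing game in C, with an assumed hidden value of 73; the names are purely illustrative.

```c
#include <stdio.h>

int main(void) {
    int secret = 73;                    /* the hidden value we try to guess */
    int low = 1, high = 100;
    int guesses = 0;

    while (low <= high) {
        int guess = (low + high) / 2;   /* pick the middle of the range */
        guesses++;
        if (guess == secret) {
            printf("Found %d in %d guesses\n", guess, guesses);
            break;
        } else if (guess < secret) {
            low = guess + 1;            /* too low: search the upper half */
        } else {
            high = guess - 1;           /* too high: search the lower half */
        }
    }
    return 0;
}
```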

Actually, the binary search is most easily explained as bitwise checking of values. A single byte goes from 00000000 to 11111111, so basically all you do is a bitwise comparison from the highest bit down to the lowest. You start with the value 10000000 (128), and if the value you are searching for is higher, you know that first bit is 1; otherwise it needs to be 0.

Your second guess would be either 11000000 (192) or 01000000 (64), and you would continue testing bits until you have tested them all. However, your last test could still indicate that you guessed wrong, so the maximum number of guesses equals the number of bits plus one.
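As a rough sketch of that bitwise way of guessing, here is the idea in C for an 8-bit unsigned value, assuming the only question we may ask is whether the secret is at least as large as our current candidate:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t secret = 201;   /* the hidden byte we want to reconstruct */
    uint8_t value  = 0;
    int tests = 0;

    /* Test each bit from 10000000 down to 00000001: if the secret is at
       least as large as the candidate with this bit switched on, the bit
       belongs in the answer. */
    for (uint8_t bit = 0x80; bit != 0; bit >>= 1) {
        tests++;
        if (secret >= (uint8_t)(value | bit))
            value |= bit;
    }

    printf("Reconstructed %d in %d bit tests\n", value, tests);
    return 0;
}
```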

And that's basically what a binary search is. In practice it tends to be slightly more complicated. You are not comparing numbers from 0 up to some maximum value; those numbers are generally an index into an array, and you compare the value at that position in the array. You have a search value X, which could be any data type and even a multi-field record, and you have an array of records that is sorted on some specific key. These arrays can be reasonably large, yet the binary search still gives you a very quick lookup.
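In code, that array-based version could look roughly like the sketch below, here over a sorted array of plain integers and using the midpoint formula that the following paragraphs discuss:

```c
#include <stddef.h>

/* Returns the position of 'target' in the sorted array 'a' of length 'n',
   or -1 when it is not present. */
long binary_search(const int *a, size_t n, int target) {
    size_t floor = 0;
    size_t ceiling = n;                          /* half-open range [floor, ceiling) */

    while (floor < ceiling) {
        size_t middle = (floor + ceiling) / 2;   /* the formula discussed below */
        if (a[middle] < target)
            floor = middle + 1;                  /* target is in the upper half */
        else if (a[middle] > target)
            ceiling = middle;                    /* target is in the lower half */
        else
            return (long)middle;                 /* found it */
    }
    return -1;                                   /* not present */
}
```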

Now, the biggest problem with the binary search is how people calculate the index value for the comparison. I already said that you could check the bits from high to low, but most developers will use a formula like (floor+ceiling)/2, where floor is the lowest index value and ceiling the highest. This can cause an interesting problem in several programming languages, because there is a risk of overflow when you do it like this!

So, overflow? Yes! If the index is an unsigned byte then it can only hold 11111111 (255) as its maximum value. So as soon as you have a floor value of 10000000 (128) and a ceiling of at least 10000001 (129), the sum needs 9 bits. But a byte can't hold 9 bits, so an overflow occurs, and what happens next is difficult to predict.
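A tiny demonstration of that wrap-around, as a sketch; note that C promotes bytes to int before adding, so the snippet stores the sum back into a byte to mimic a machine doing true 8-bit arithmetic:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t floor   = 128;                      /* 10000000 */
    uint8_t ceiling = 130;                      /* 10000010 */

    uint8_t sum = (uint8_t)(floor + ceiling);   /* 258 wraps around to 2 */
    uint8_t mid = sum / 2;                      /* 1 instead of the correct 129 */

    printf("wrapped sum = %d, 'midpoint' = %d\n", sum, mid);
    return 0;
}
```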

For a signed byte it is even worse, since the value 10000000 is -128, so you effectively have only 7 bits to use. If the 8th bit gets set, your index value becomes negative! This means that with a signed byte your array can never be longer than 64 records, otherwise this math will overflow. (64+65 would be 129, which translates to -127 for signed bytes.)

Fortunately, most developers use integers as index values, not bytes, and they generally have arrays larger than 256 records anyway. That reduces the risk of overflow, but it does not remove it. Integers use one bit for the sign and the remaining bits for the number, so a 16-bit integer has 15 bits for the value. An overflow can therefore happen once the number of records has that highest value bit set, meaning any value of 16384 and over. If your array has more than 16384 records, the calculation (floor+ceiling)/2 will sometimes overflow.
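A sketch of that situation with made-up bounds deep inside a very large array; the sum is computed in 64 bits here only so we can look at it, since actually overflowing a signed 32-bit addition is undefined behaviour in C:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t floor   = 1600000000;   /* hypothetical bounds; each one is a  */
    int32_t ceiling = 2000000000;   /* perfectly valid 32-bit index alone  */

    /* Do the sum in 64 bits so we can inspect it safely. */
    int64_t sum = (int64_t)floor + (int64_t)ceiling;

    printf("floor + ceiling = %lld\n", (long long)sum);
    printf("INT32_MAX       = %d\n", INT32_MAX);

    /* A plain 32-bit addition cannot hold 3600000000; with the usual
       two's-complement wraparound, (floor+ceiling)/2 would come out
       negative and the search would index far outside the array. */
    return 0;
}
```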

So, people solved this by changing the formula to floor+((ceiling-floor)/2), because ceiling-floor cannot overflow. It makes the math slightly more complex, but this is the formula most people are familiar with when doing a binary search!
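Applied to the earlier sketch, the only thing that changes is the midpoint calculation, for example as a small helper:

```c
#include <stddef.h>

/* Overflow-safe midpoint: ceiling - floor always fits, because both are
   valid indices and ceiling >= floor. */
size_t midpoint(size_t floor, size_t ceiling) {
    return floor + (ceiling - floor) / 2;
}
```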

Yet this formula makes little sense if you want high performance! If you want a binary search, you could just toggle each bit of the index until you have found the value. To do so, you need to know how many bits you need for the highest index, and that also tells you how many guesses you will need, at most, to find the value. But this kind of bitwise math tends to be too complex for most people.
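For what it's worth, here is a sketch of that bit-toggling approach over a sorted array, assuming ascending order and no duplicates; it builds the index bit by bit instead of computing midpoints, so there is no addition that could overflow:

```c
#include <stddef.h>

/* Bit-toggling binary search: returns the position of 'target' in the
   sorted array 'a' of length 'n', or -1 when it is not present. */
ptrdiff_t bitwise_search(const int *a, size_t n, int target) {
    /* 'bit' becomes the highest power of two that does not exceed n. */
    size_t bit = 1;
    while ((bit << 1) != 0 && (bit << 1) <= n)
        bit <<= 1;

    /* Switch bits on greedily, from high to low. 'pos' counts how many
       elements are known to be <= target. */
    size_t pos = 0;
    for (; bit != 0; bit >>= 1) {
        size_t candidate = pos | bit;
        if (candidate <= n && a[candidate - 1] <= target)
            pos = candidate;               /* keep this bit switched on */
    }

    /* The one extra check at the end, as described above. */
    if (pos > 0 && a[pos - 1] == target)
        return (ptrdiff_t)(pos - 1);
    return -1;
}
```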

So, there is another solution: you can promote the index values to a bigger type. If the index is a 16-bit value, you can do the math in 32 bits. Thus, you could use (int16)(((int32)floor+(int32)ceiling)/2) and the overflow is gone again. And for a 32-bit index you could promote the math to a 64-bit integer type and again avoid any overflow.
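A sketch of that promotion trick for a 16-bit index, spelled with C's stdint.h type names:

```c
#include <stdint.h>

/* Midpoint of two 16-bit indices, computed in 32 bits so the sum can never
   overflow; the result fits back into 16 bits because it lies between
   floor and ceiling. */
int16_t midpoint16(int16_t floor, int16_t ceiling) {
    return (int16_t)(((int32_t)floor + (int32_t)ceiling) / 2);
}
```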

It is still less optimal than just toggling bits but the math still looks easy and you can explain why you’re promoting the values.

But what if the index is a 64-bit value? Hardly any programming language offers a 128-bit integer type. So how do you avoid overflows in those languages?

Well, here's another thing. As I said, the index value is an index into an array, and this array is sorted and should not contain any duplicate values. So if you have 200 records, you also need 200 unique values, each at least 1 byte in size. If the index is a signed 16-bit integer (15 usable bits), then the values in the array must also be at least 15 bits wide to stay unique, and in practice they will be wider. Most likely the array contains pointers to records elsewhere in memory, and pointers are generally 32 bits. (In the old MS-DOS era, pointers were 20 bits, so those systems could manage up to 1,048,576 bytes or 1 megabyte of memory.)

So, let's do the math! For an overflow to occur with a signed 16-bit index, you need at least 16384 records. Each record is then at least 2 bytes in size, so you have at least 32 kilobytes of data to search through. Most likely even more, since the array is probably made up of pointers pointing to string values or whatever, but 32 KB is the minimum before an overflow can occur with a 16-bit signed index.

A signed 32-bit index would need at least bit 30 set before an overflow can occur. It would also need to contain 32-bit values to keep every value unique, so you would have 4 GB of data to search through. And yes, that is the minimum amount of data required before an overflow can occur. You would also need up to 31 comparisons to find the value you're searching for, which is already getting high.

A signed 64-bit index would have records of at least 8 bytes in size! That requires 36,893,488,147,419,103,232 bytes of data: 33,554,432 terabytes, or 32,768 petabytes, or 32 exabytes! That is a huge amount of data, twice the amount of data stored by Google! And you need more data than this before an overflow can occur. And this assumes you are just storing 64-bit integer values in the array; in general, the data stored will be more complex.
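Those minimums are easy to double-check; a small sketch using the assumptions above (records of 2, 4 and 8 bytes respectively):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Signed 16-bit index: 2^14 records of at least 2 bytes each. */
    uint64_t bytes16 = ((uint64_t)1 << 14) * 2;
    printf("16-bit index: %llu bytes (32 KB)\n", (unsigned long long)bytes16);

    /* Signed 32-bit index: 2^30 records of at least 4 bytes each. */
    uint64_t bytes32 = ((uint64_t)1 << 30) * 4;
    printf("32-bit index: %llu bytes (4 GB)\n", (unsigned long long)bytes32);

    /* Signed 64-bit index: 2^62 records of at least 8 bytes each. That is
       2^65 bytes, which does not even fit in a 64-bit counter anymore, so
       we only state it: 32 exabytes. */
    printf("64-bit index: 2^65 bytes (32 EB)\n");
    return 0;
}
```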

So, the chance of an overflow with a 32-bit index is small, and with a 64-bit index it is very unlikely; the amount of data required would be huge. And once you are dealing with that much data, you will have to consider alternative solutions anyway.

That alternative would be a hash table. By using a hash function you can reduce any value to, say, a 16-bit hash. That hash is the index into an array of pointers, so with a 16-bit index the whole array takes 256 KB. Each entry in this array points to a second, sorted array of records, giving you 65536 separate sorted arrays, and within each of them you can use a binary search. This is ideal for huge amounts of data, although you can optimize further by calculating an even bigger hash value (e.g. 20 bits).

Using a hash table is quite easy. You calculate the hash over the value you are searching for and then check the list at that specific position in your hash table. If it is empty, your value is not in the system. Otherwise, you search the list at that specific location. Especially when the hash function distributes all possible values evenly, a hash table is extremely effective.
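As a sketch of how those two steps fit together, here is a hypothetical layout in C with 65536 buckets of plain 64-bit keys and a toy 16-bit hash; every name in it is made up for illustration:

```c
#include <stdint.h>
#include <stddef.h>

#define BUCKET_COUNT 65536

typedef struct {
    uint64_t *keys;    /* sorted ascending, no duplicates */
    size_t    count;   /* zero means the bucket is empty */
} bucket_t;

static bucket_t table[BUCKET_COUNT];

/* Toy 16-bit hash (a simple bit mixer); any evenly distributing function
   will do. */
static uint16_t hash16(uint64_t key) {
    key ^= key >> 33;
    key *= 0xff51afd7ed558ccdULL;
    key ^= key >> 29;
    return (uint16_t)(key & 0xffff);
}

/* Hash first, then binary-search the (much smaller) sorted bucket. */
int contains(uint64_t key) {
    const bucket_t *b = &table[hash16(key)];
    size_t floor = 0, ceiling = b->count;        /* half-open range */

    while (floor < ceiling) {
        size_t middle = floor + (ceiling - floor) / 2;
        if (b->keys[middle] < key)
            floor = middle + 1;
        else if (b->keys[middle] > key)
            ceiling = middle;
        else
            return 1;                            /* found */
    }
    return 0;                                    /* empty bucket or absent */
}
```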

Which brings me to a clear point: the binary search isn't really suitable for large amounts of data! First of all, your data needs to be sorted! And you need to maintain that sort order every time you add or delete records, or change the key value of a record! Hash tables are generally unsorted and have better performance, especially with large amounts of data.

So, people who use a 32-bit index for a binary search are just bringing trouble upon themselves if they fear overflows. When they start using floor+((ceiling-floor)/2) for their math, they are clearly showing that they don't understand the algorithm that well. The extra math slows the algorithm down slightly, while the risk of overflow should not even exist. If it does exist with a 32-bit index, then you are already using the wrong algorithm to search your data: you are maintaining an index of at least 4 GB, which makes it really difficult to insert new records. That is, if overflows can occur at all. The time needed to sort that much data is also considerable and, again, far from optimal.

Thing is, developers often tend to use the wrong algorithms and often have the wrong fears. Whenever you use a specific algorithm, you have to consider all the options. Are you using the right algorithm for the problem? Are you at risk of overflows or underflows? How much data do you expect to handle? And what are the alternatives?

Finally, as I said, doing a binary search basically means toggling bits in the index. Instead of doing math to calculate the halfway point, you can just toggle bits from high to low. That way, there is never even a chance of an overflow.