Bit representation
At the lowest level, computers store everything as binary — sequences of 0s and 1s. These binary sequences can represent anything and are easy for computers to interpret. One of the most frequent things that binary sequences represent is integers: whole numbers, whether negative, zero, or positive. These are things like 1, 5, 42, -10, or 1 million.
So how can we encode integer numbers as binary sequences?
A binary number is simply a number represented in powers of 2, which is what we call a Base 2 number. This is very similar to decimal numbers, where each number is represented in powers of 10, making them Base 10 numbers.
Decimal Numbers (Base 10)
- A single decimal digit has 10 possible values ($[0-9]$). Once we count past the largest digit, we add another digit (i.e. a new column) to the number representation. Each column represents a power of 10, such as $1, 10, 100, 1000, ...$.
- The decimal number $365$ can be decomposed/represented as $365 = 3·100 + 6·10 + 5·1$. So each digit is in the range $[0–9]$ and each column is a power of 10.
Binary Numbers (Base 2)
- Binary numbers follow the same intuition, but instead of working in tens, they work in twos. So each digit has two possible values ($[0, 1]$) and each digit/column is a power of 2.
- The number $13$ is $1101$ in binary, and it can be decomposed/represented as $13 = 1·8 + 1·4 + 0·2 + 1·1 = 8 + 4 + 1$
Whether it’s Base 10 or Base 2, changing a single digit adds or removes a multiple of exactly that column’s place value. Nothing else changes, and each term is completely independent.
In binary numbers, each “column” is a single bit, so we use a total of $n$ bits to represent a single number. How many numbers can we represent if we use 2 bits (or 2 columns)? Let’s enumerate them: 00, 01, 10, 11. That’s all! So 4 numbers, which is $2^2$, where the base $2$ is the number of values per bit and the exponent $2$ is the number of bits. With $n$ bits, we can represent $2^n$ unique numbers!
Common bit widths are themselves powers of 2, so we have 8, 16, 32, or even 64-bit numbers. Data types in programming languages often indicate how the data should be interpreted by the compiler or interpreter. For example, the C++ data type `int` is typically a 32-bit type, which means that every `int` consists of exactly 32 bits. How many numbers can we represent with 32 bits? Well, that’s $2^{32}$, so $4,294,967,296$ integers!
Let’s go a bit deeper into how numbers are represented and take the `int` number 43 as an example. In a 32-bit binary representation, we have `43 = 00000000000000000000000000101011`. That looks like a long sequence, but the key takeaway is that the bits in the representation are indexed from right to left, so we have $(b_{n-1}, ···, b_2, b_1, b_0)$. We’ve seen above that this position is important: the index represents the power of 2. We can convert any binary number (Base 2) to decimal (Base 10) with the following formula:

$$v = \sum_{k=0}^{n-1} b_k \cdot 2^k$$

where $k$ is the index (or position) of the bit in the sequence, and $b_k$ is the value of the bit at position $k$.
Taking the example of $43$, we can apply the formula: $1\cdot2^5 + 0\cdot2^4 + 1\cdot2^3 + 0\cdot2^2 + 1\cdot2^1 + 1\cdot2^0 = 32 + 0 + 8 + 0 + 2 + 1 = 43$. We skip the leading zero bits for simplicity. Notice how we build the number $43$ by summing over selected powers of 2. With this mechanism, 32 bits can represent any non-negative integer up to $2^{32}-1$. In general, with $n$ bits we can cover the integer range $[0, 2^n−1]$; the $-1$ is there because we start counting at $0$.
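To make the formula concrete, here is a minimal C++ sketch that rebuilds $43$ by summing its selected powers of 2:

```cpp
#include <iostream>

int main() {
    // Bits of 43, indexed right to left: (b5, ..., b0) = 101011.
    int bits[] = {1, 1, 0, 1, 0, 1}; // bits[k] holds b_k
    int value = 0;
    int power = 1; // 2^k, starting at 2^0
    for (int k = 0; k < 6; ++k) {
        value += bits[k] * power; // add b_k * 2^k
        power *= 2;
    }
    std::cout << value << '\n'; // prints 43
    return 0;
}
```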
One of the shortcomings of this representation is that we can only encode the natural numbers (the number zero and positive integers). This is what we refer to as an unsigned number. The binary representation does not encode the sign of the number, which is either $+$ or $-$, indicating if the number is positive or negative.
Signed and unsigned numbers
In the previous section, we’ve seen how we can encode the natural numbers with unsigned representations. There are many situations where negative numbers are useful, and we can encode those with signed representations. Besides the sign, one of the key differences between the representations is the range of integers that can be encoded with $n$ bits:
- An unsigned variable of $n$ bits can contain any integer between $0$ and $2^{n}-1$.
- A signed variable of $n$ bits can contain any integer between $-2^{n-1}$ and $2^{n-1} - 1$.
In a signed number representation, the first bit in the sequence (also called the most significant bit) carries the sign of the number. A value of $0$ in that bit leads to non-negative numbers, and a value of $1$ leads to negative numbers. The number line is split at $0$, and the remaining $n−1$ bits in the sequence encode the magnitude of the number on either side of that line.
Intuitively, we can think of building a negative integer by starting at $-2^{n-1}$ and adding back smaller powers of 2 until we reach the target value. A positive number, in turn, is built by adding only the smaller powers of two, up to a maximum of $2^{n-1}-1$.
For simplicity, let’s consider an 8-bit number as an example! We know that $2^{8-1}=2^{7}=128$. So we have 8 bits, each contributing a power of 2 up to $2^7$ (the top one contributing negatively).
Representing $-1$
- The binary number is `11111111`
- And we build it as $-128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = -1$

Representing $1$
- The binary number is `00000001`
- And we build it as $-128·0 + 64·0 + 32·0 + 16·0 + 8·0 + 4·0 + 2·0 + 1 = 1$

Representing $-128$ (min)
- The binary number is `10000000`
- And we build it as $-128·1 + 64·0 + 32·0 + 16·0 + 8·0 + 4·0 + 2·0 + 1·0 = -128$

Representing $127$ (max)
- The binary number is `01111111`
- And we build it as $-128·0 + 64·1 + 32·1 + 16·1 + 8·1 + 4·1 + 2·1 + 1·1 = 127$
This example should make it clear why we can only represent numbers up to $2^{n-1}-1$ with signed integers, rather than the $2^{n}-1$ of unsigned integers: the first bit contributes the sign for negative numbers, so its power cannot be used for positive numbers. The all-zeros bit sequence represents the number $0$.
This type of design, where every bit contributes positively except the first one (which contributes $-2^{n-1}$), is called Two’s Complement. It allows us to elegantly and efficiently do things like addition or subtraction directly on bit sequences.
Let’s do $-1 + 1$ in 8-bit binary:
```
   11111111   (-1)
 + 00000001   (+1)
 ----------
[1]00000000   (0, once the overflow bit is discarded)
```
A quick note on addition in binary: when we encounter $1+1$, the result in that column is $0$, and we carry a $1$ to the following column as we move right to left. This is the same as decimal addition. In the example above, we discard the overflow bit, represented as $[1]$, and we are left with the binary representation of $0$.
One of the key properties of Two’s Complement is that we can find the negative of any number by simply flipping all the bits, then adding 1. “Flipping all the bits” is also the same as taking the bitwise complement, or applying the NOT bitwise operation, which we will cover below.
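Here is a quick sketch of this identity, assuming two’s-complement integers (which every mainstream platform uses):

```cpp
#include <cassert>

int main() {
    int values[] = {0, 1, 7, 43, -5};
    for (int x : values) {
        // Flip all the bits, then add 1: this is exactly -x in two's complement.
        assert(~x + 1 == -x);
        // Equivalent formulation: ~x == -x - 1.
        assert(~x == -x - 1);
    }
    return 0;
}
```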
Signed integers in binary can be visualized as a number line that wraps around on itself. For an 8-bit integer, $-1$, $0$, and $1$ are all together on the number line, but so are $-128$ and $127$. At that seam, the bits overflow and the carry is discarded. This image on Wikipedia has a good visualization of the process for 4-bit integers.
Formulas and ranges
Unsigned integers are defined by the formula below, where all bits contribute positively, covering the range $[0, 2^{n}−1]$:

$$v = \sum_{k=0}^{n−1} b_k \cdot 2^k$$

Signed integers in two’s complement, on the other hand, are defined by the formula below, which follows the intuition described above and covers the range $[−2^{n-1}, 2^{n−1}−1]$:

$$v = −b_{n−1} \cdot 2^{n−1} + \sum_{k=0}^{n-2} b_k \cdot 2^k$$

So we have two interpretations (signed and unsigned numbers) using the same sequence of $n$ bits. At a very low level, machines do not care which interpretation is being used; this is typically defined in the programming language that we use. For example, in C, the data types `unsigned char` and `signed char` are both 8 bits and use the same addition instruction; the bit sequence `11111111` just means 255 in one and −1 in the other.
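A minimal sketch of that dual interpretation (the conversion to `signed char` below is implementation-defined before C++20, but yields −1 on all common compilers):

```cpp
#include <cstdio>

int main() {
    unsigned char u = 0xFF;              // bit pattern 11111111
    signed char   s = (signed char)0xFF; // the very same bit pattern

    printf("unsigned: %u\n", (unsigned)u); // prints 255
    printf("signed:   %d\n", (int)s);      // prints -1
    return 0;
}
```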
Bitwise operations
A bitwise operation treats an integer purely as a sequence of $n$ independent bits and applies a rule position by position: bit $k$ of the output depends only on bit $k$ of the input, and no other position is ever affected. There is no carrying, no borrowing, and no interaction between columns. Each bit is processed completely independently.
This is in contrast to standard arithmetic operations. When we compute $19+1$, a carry-over can affect other columns and transform multiple bit positions. This is not the case for bitwise operations.
Bitwise operators
There are four logical bitwise operations and two shift operations. The logical operations can each be interpreted as a traditional logic gate, a physical circuit that takes one or two single bits and produces one bit in return.
Logical operators
- AND (`&`) — output bit is 1 only when both input bits are 1. Everything else produces 0.
- OR (`|`) — output bit is 1 when at least one input bit is 1. Only “0 and 0” produces 0.
- XOR (`^`) — output bit is 1 when the two input bits differ. Same inputs always produce 0.
- NOT (`~`) — flips every bit: 0 becomes 1, 1 becomes 0.
Shift operators
- Left shift (`<<`) — shifts all bits toward higher positions by $n$ steps, filling the vacated low bits with 0. This is equivalent to multiplying by $2^n$, because each position represents a power of 2, although it only works as long as no significant bits overflow off the top.
- Right shift (`>>`) — shifts all bits toward lower positions by $n$ steps, discarding the bits that fall off. For unsigned integers, the vacated high bits fill with 0 (a logical shift). For signed integers, in most cases, the sign bit is preserved (an arithmetic shift), as the sketch below shows.
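Here is a short sketch of the two right-shift behaviors (right-shifting a negative signed value was implementation-defined before C++20, but compilers consistently implement the arithmetic shift):

```cpp
#include <bitset>
#include <iostream>

int main() {
    unsigned char u = 0b10010000; // 144
    signed char   s = -112;       // the same bit pattern, 10010000

    // Logical shift: the vacated high bits fill with 0.
    std::cout << std::bitset<8>(u >> 2) << '\n'; // 00100100

    // Arithmetic shift: the sign bit is copied into the vacated positions.
    std::cout << std::bitset<8>(s >> 2) << '\n'; // 11100100
    return 0;
}
```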
The shifts are sometimes considered separately from the “true” bitwise operations because they do have a positional dependency. However, this dependency is only on the input: the $k$-th bit of the output depends on bit $k-n$ (left shift) or bit $k+n$ (right shift) of the input. There is still no carry-over between bits.
Let us now define each of the operators and gain an intuition for what they are doing on the bit sequence. One way to conceptualize it is that one of the inputs is actually a bit-mask, which either blocks or allows specific bits. A mask is a number where certain bits are set so that we can modify parts of another number.
AND (&)
The operator AND is a filter. If we think of one operand as the value, and the other as mask, then AND is simply a filter that blocks or allows bits. Wherever the mask has a 1, the value’s bit passes through unchanged. Wherever the mask has a 0, the output is forced to 0, and that bit is blocked. For this reason, AND is often used to isolate or check specific bits.
```
value:  00101011  (43)
mask:   00001111
AND:    00001011  (the low 4 bits pass through, the rest are blocked)
```
OR (|)
The operator OR is a setter. The output is 0 only when both inputs are 0. Where the mask has a 1, the output bit is forced to 1 regardless of the original bit value. Where the mask has a 0, the input bit passes through unchanged. It’s the complement of AND’s filter role. Instead of clearing bits, OR sets them.
```
value:  00101011  (43)
mask:   11110000
OR:     11111011  (the high 4 bits are forced to 1)
```
XOR (^)
The operator XOR is a toggler. The output is 1 when the inputs differ, 0 when they are the same. Where the mask has a 1, the bit is flipped. Where the mask has a 0, the bit passes through unchanged. It also has the interesting property of cancelling out the same values. If we take a bit sequence $A$, we have $A \oplus B \oplus B = A$. So applying XOR to the same value twice cancels out completely, which means it is its own inverse.
```
value:  00101011  (43)
mask:   00001111
XOR:    00100100  (the low 4 bits are flipped)
```
NOT (~)
The operator NOT is the only unary bitwise operator. It simply inverts every bit.
We’ve seen above that for signed integers with two’s complement, NOT has an arithmetic meaning. ${\sim}A = -A - 1$, or equivalently $A + {\sim}A = -1$ (all ones).
Additionally, ${\sim}A = A \oplus \underbrace{11\ldots1}_{n}$. This is XOR with an all-ones mask, which means XOR can be used as NOT.
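Both identities are easy to check with a quick sketch, again assuming two’s complement:

```cpp
#include <cassert>
#include <cstdint>

int main() {
    int8_t values[] = {0, 1, -5, 42, 127};
    for (int8_t a : values) {
        assert((int8_t)~a == (int8_t)(-a - 1));   // ~A == -A - 1
        assert((int8_t)(a + ~a) == -1);           // A + ~A == all ones
        assert((int8_t)~a == (int8_t)(a ^ 0xFF)); // ~A == A XOR 11111111
    }
    return 0;
}
```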
Shifts (<< and >>)
We formally define a left shift by $n$ as:
$$(A \ll n)_k = \begin{cases} b_{k-n} & \text{if } k \geq n \\ 0 & \text{if } k < n \end{cases}$$

This means that each bit moves up $n$ positions, with the bottom $n$ positions filling with 0.
The approach to right shifts is similar, but the direction reverses. In this case, the bits move down, and the top $n$ positions are filled with 0 for unsigned integers, or with the sign bit for signed integers.
There are two core applications of left and right shifts. One is multiplication/division and the other is building masks.
Multiplication and division
One of the most important intuitions for left and right shifts is multiplication and division. Generally speaking, a left shift by $n$ means we multiply by $2^n$, and a right shift by $n$ means we divide by $2^n$.
$$A \ll n = A \times 2^n \qquad A \gg n = \lfloor A / 2^n \rfloor$$

This intuition falls directly out of the formula for binary numbers, where each position in the bit sequence corresponds to a power of $2$. Shifting every bit up by one position moves it to a column worth twice as much, which multiplies the number by 2. Shifting down halves it.
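A minimal check of this equivalence. One caveat worth knowing: for negative signed values, `>>` floors while `/` truncates toward zero, so the two can differ:

```cpp
#include <cassert>

int main() {
    assert((13 << 2) == 13 * 4); // left shift by 2 multiplies by 2^2
    assert((43 >> 3) == 43 / 8); // 43 / 8 = 5.375, both give 5
    // For negative values the two round differently:
    assert((-7 >> 1) == -4); // arithmetic shift floors toward -infinity
    assert((-7 / 2) == -3);  // integer division truncates toward zero
    return 0;
}
```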
Building masks
Common bit tasks almost always begin by creating a bit mask, which is essentially a bit sequence with a single bit turned on at a critical position, such as `00010000`. This allows us to manipulate the bit at that position without affecting all the others. We rarely hardcode masks; instead, we create them programmatically with expressions such as `1 << k`, where `k` is a variable. That single expression is the seed for most programming bit tricks.
We can create single-bit masks or range masks. The subtraction trick `(1 << n) - 1` produces exactly $n$ ones in a row, which we can then shift into position.
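Here is a small sketch of the trick, with values chosen purely for illustration:

```cpp
#include <bitset>
#include <iostream>

int main() {
    int low4 = (1 << 4) - 1;        // 10000 - 1 = 01111: four ones in a row
    int mid4 = ((1 << 4) - 1) << 2; // the same run, shifted into position

    std::cout << std::bitset<8>(low4) << '\n'; // 00001111
    std::cout << std::bitset<8>(mid4) << '\n'; // 00111100
    return 0;
}
```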
Bit masks
Bit masks come out of the left and right shift operations. We define bit masks programmatically by using an expression such as `x << k`. What does this expression mean exactly?

- The `x` indicates the starting bit pattern, which is typically `1`. The number 1 evaluates to `00000001` in an 8-bit representation.
- The operator `<<` is the left shift, which we described earlier.
- The value `k` is the number of positions by which we shift the bit sequence.
Here is an example:
```
00000001 << 2 = 00000100
```
So we shift the number 1 by two positions, arriving at the binary number 00000100. It turns out that this is the decimal number $4$, which is the same as multiplying $1\cdot2^2 = 4$.
The pattern `1 << k` is extremely popular because it is shorthand for creating a binary sequence with exactly one bit turned on, at position $k$.
```
1 << 0 = 00000001
1 << 3 = 00001000
1 << 7 = 10000000
```
These masks can then be used with logical bitwise operators to build powerful bit manipulation operations.
Bit tasks
There are five fundamental bit tasks that come up constantly, either in real world applications or in programming challenges and exercises from platforms like LeetCode. They all follow the same structure: build a bit mask with `1 << k`, then apply an appropriate logical operator. The five tasks are summarized in the table below, and we’ll do a deep dive for each one of them in the remainder of this section.
| Task | Expression | Operators |
|---|---|---|
| Get | `(x >> k) & 1` | AND |
| Set | `x \| (1 << k)` | OR |
| Clear | `x & ~(1 << k)` | AND, NOT |
| Toggle | `x ^ (1 << k)` | XOR |
| Update | `(x & ~(1 << k)) \| (v << k)` | AND, NOT, OR |
Get bit
The aim of this task is to read the bit at position $k$ and return its value. It is often also called check bit.
```
(x >> k) & 1
```
We first right shift the input binary sequence $x$ by $k$ positions with `(x >> k)`. This operation moves the bit at the original position $k$ down to position $0$. We then AND the shifted sequence with the number 1, which masks out every bit position except the 0th position.
Let’s work through an example. Our goal is to check the bit at position 3 of x = 43 (00101011):
```
x            = 00101011  (43)
x >> 3       = 00000101
(x >> 3) & 1 = 00000001  (bit 3 is set)
```
Alternatively, we can run the check in the opposite direction by using `x & (1 << k)`. This operation creates a mask full of zeros except at position $k$. ANDing the mask with $x$ forces every bit to 0 except the one at position $k$, which passes through. The expression therefore returns $0$ (bit not set) or $2^k$ (bit is set). For a simple 0/1 output, we need to add a comparison, such as `(x & (1 << k)) != 0`, as the sketch below shows.
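A small sketch comparing the two forms:

```cpp
#include <iostream>

int main() {
    int x = 43; // 00101011
    int k = 3;

    std::cout << ((x >> k) & 1) << '\n';        // 1: shifted form, already 0 or 1
    std::cout << (x & (1 << k)) << '\n';        // 8: mask form returns 0 or 2^k
    std::cout << ((x & (1 << k)) != 0) << '\n'; // 1: normalized by the comparison
    return 0;
}
```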
Set bit
The set bit task forces the bit at position $k$ to be $1$, regardless of its current value.
```
x | (1 << k)
```
As before, we create a bit mask by using the left shift operation `(1 << k)`, which returns all zeros except at position $k$. We then apply OR, the setter operator, between the input sequence $x$ and the mask. All input bits are left unchanged where the mask is $0$; at position $k$, the output bit is always $1$, whether it was originally set or not.
```
x            = 00101011  (43)
1 << 2       = 00000100
x | (1 << 2) = 00101111  (47)
```
Clear bit
The clear bit task forces the bit $k$ to be $0$, regardless of its current value. This is practically the reverse of the set bit task.
```
x & ~(1 << k)
```
We create a mask by left shifting 1 to position $k$, then we negate that mask with NOT, producing all ones except at the target position. We apply AND as a filter operator, which lets everything pass except the bit at position $k$, which is forced to $0$.
```
x             = 00101011  (43)
~(1 << 3)     = 11110111
x & ~(1 << 3) = 00100011  (35)
```
Some variants of the clear bit task focus on clearing all bits from the most significant position down to $k$ (inclusive), or all bits from $k$ down to $0$ (inclusive).
Toggle bit
The toggle bit task flips the bit at position $k$. So if this bit was $0$, it becomes $1$. And if it was $1$, it becomes $0$.
```
x ^ (1 << k)
```
This is quite similar to setting a bit, but we use XOR instead of OR. From our description above, we see XOR as a toggler. So we first create a mask full of zeros except at position $k$. Applying XOR between $x$ and the mask leaves every original bit unchanged, except at position $k$. Remember that with XOR, $1 \oplus 1 = 0$ and $0 \oplus 1 = 1$, so the original bit always changes where the mask is $1$.
```
x            = 00101011  (43)
1 << 3       = 00001000
x ^ (1 << 3) = 00100011  (35)
```
In this example, we arrived at the same output as with clear bit because the original bit was $1$. But we can choose a different position:
```
x            = 00101011  (43)
1 << 2       = 00000100
x ^ (1 << 2) = 00101111  (47)
```
Bonus task: Update bit
This is an additional task that is often skipped from the main list. The goal is to update the value of a bit to a specific input $v \in \{0, 1\}$. It is basically a generalization of the set and clear tasks: it accepts an extra input $v$ with the target value, and the bit at position $k$ gets set to $v$, regardless of its original value.
```
(x & ~(1 << k)) | (v << k)
```
Note that this expression has three steps. Let’s work through these one at a time.
First, we do a clear bit operation, where we clear the bit at position $k$: `(x & ~(1 << k))`. This follows the same intuition as above.
Then, we create a bit mask with `v << k`, which left shifts the value `v` by `k` positions. The output of this step is a mask with all bits set to $0$ and the bit at position $k$ carrying the value $v$. The only difference from our usual mask is the starting value of the shift: since $v$ is either $1$ or $0$, this is effectively `1 << k` or `0 << k`.
Finally, we apply the OR operator between the sequence with the cleared bit and the new mask, which is the same as the set bit task.
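Putting the three steps together as a small helper (a sketch; the name `update_bit` is just illustrative):

```cpp
#include <bitset>
#include <iostream>

// Force the bit at position k of x to the value v (0 or 1).
int update_bit(int x, int k, int v) {
    return (x & ~(1 << k)) | (v << k);
}

int main() {
    int x = 43; // 00101011
    std::cout << std::bitset<8>(update_bit(x, 3, 0)) << '\n'; // 00100011 (35)
    std::cout << std::bitset<8>(update_bit(x, 4, 1)) << '\n'; // 00111011 (59)
    return 0;
}
```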
Applications
The most immediate application of bit manipulation techniques for most programmers is coding challenges and technical interviews. Bit manipulation problems appear on platforms like LeetCode, and can occasionally show up in interview questions. Yet this toolbox is surprisingly absent from common Algorithms and Data Structures syllabi and roadmaps. These learning notes will hopefully help provide some intuition on this topic. Most programming challenges can be reduced to some combination of the get, set, clear, toggle, and update tasks.
Beyond that, bit manipulation can be a useful set of tools for specific problems. For example, in systems and embedded programming, device registers are controlled by setting and clearing individual bits. Unix file permissions (`chmod 755`) are bit fields. Subnet masks in networking are bitwise AND operations applied to IP addresses. I particularly enjoyed reading how we can pack RGB values into single integers. Finally, bit operations show up in performance-sensitive code: using shifts as a cheaper alternative to multiply and divide, packing multiple boolean flags into a single integer, or encoding two small values into one word. This is less common in general application code, but relevant in game engines, compilers, and data compression algorithms.
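As a taste of that last family, here is a sketch of packing 8-bit RGB channels into a single integer (the 0xRRGGBB layout is one common convention, and the function names are mine):

```cpp
#include <cstdint>
#include <cstdio>

// Pack three 8-bit channels into a single 32-bit word: 0x00RRGGBB.
uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

// Unpack with a right shift and an 8-bit range mask.
uint8_t red(uint32_t c)   { return (c >> 16) & 0xFF; }
uint8_t green(uint32_t c) { return (c >> 8) & 0xFF; }
uint8_t blue(uint32_t c)  { return c & 0xFF; }

int main() {
    uint32_t teal = pack_rgb(0, 128, 128);
    printf("%06X -> r=%u g=%u b=%u\n", (unsigned)teal,
           (unsigned)red(teal), (unsigned)green(teal), (unsigned)blue(teal));
    // prints: 008080 -> r=0 g=128 b=128
    return 0;
}
```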
Interactive Demo
I am putting together an interactive tutorial to practice the operations covered in this post. It is still a work in progress, but feel free to try it out.
References
- All About Bit Manipulation — GeeksforGeeks
- Bit manipulation — Wikipedia
- Bit manipulation — CP Algorithms
- Two’s complement — Wikipedia
Parts of this post were developed with assistance from Claude (Anthropic) and ChatGPT (OpenAI).
- Apr 8, 2026 — First published.
- Apr 10, 2026 — Add references section and link to interactive tutorial.