The standard way to do division is by implementing binary long division. This involves subtraction, so as long as you don't count that as a non-bitwise operation, this is the approach to take. (Note that you can, of course, implement subtraction itself using bitwise logical operations, albeit very tediously.)
In essence, if you're doing `Q = N/D`:

1. Align the most-significant ones of `N` and `D`.
2. Compute `t = (N - D);`.
3. If `(t >= 0)`, then set the least significant bit of `Q` to 1, and set `N = t`.
4. Left-shift `N` by 1.
5. Left-shift `Q` by 1.
6. Go to step 2.
Loop for as many output bits (including fractional) as you require, then apply a final shift to undo what you did in Step 1.