Ah. Well, actual coding isn’t done in binary; you use various programming languages, which all eventually trace back to assembly (ASM), which is then translated into binary.
In terms of reading binary, it’s very simple, much like reading decimal. In decimal we go from 0 to 9 (10 digits) before we move on to the next place value. So we have our 1s, 10s, 100s and so on. Each place value is 10 times the last because decimal is base 10.
Take the number 1234: here you have 4 ones, 3 tens, 2 hundreds and 1 thousand. Thus, 1234 = (1x1000) + (2x100) + (3x10) + (4x1). Using the same logic, 502 = (5x100) + (0x10) + (2x1).
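If it helps to see that as code, here’s a rough Python sketch of the same idea (the function name `expand` is just something I made up) that breaks a decimal number into its place values:

```python
# Rough sketch: break a base-10 number into its place values.
def expand(n):
    digits = str(n)
    parts = []
    for i, d in enumerate(digits):
        place = 10 ** (len(digits) - 1 - i)  # 1000s, 100s, 10s, 1s...
        parts.append(f"({d}x{place})")
    return " + ".join(parts)

print(expand(1234))  # (1x1000) + (2x100) + (3x10) + (4x1)
print(expand(502))   # (5x100) + (0x10) + (2x1)
```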
Binary works in exactly the same way, except it is base 2, so each digit only goes between 0 and 1 before moving on to the next place value. In decimal the place values were 10 times the one before; because binary is base 2, each place value is twice the one before it. So in binary the place values double each time: 1, 2, 4, 8, 16… This is why the ‘kilo’ prefix, which normally means 1000, is actually 1024 in computing: if you keep doubling the binary place values, you land on 1024, not 1000. Likewise, a gigabyte is actually 1024 megabytes. This means that the number 10101 (reading the digits left to right, starting from the 16s place) is (1x16) + (0x8) + (1x4) + (0x2) + (1x1) = 16 + 4 + 1 = 21. How these translate into words, though, is beyond me.
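If you want to play with the number part of it, here’s a rough Python sketch (`binary_to_decimal` is just a name I picked) that adds up the place values of 10101 and lists the powers of two up to 1024:

```python
# Rough sketch: add up the place values of a binary string by hand.
def binary_to_decimal(bits):
    total = 0
    for i, bit in enumerate(reversed(bits)):  # rightmost digit is the 1s place
        total += int(bit) * (2 ** i)
    return total

print(binary_to_decimal("10101"))  # 21
print(int("10101", 2))             # 21, Python's built-in conversion agrees

# The binary place values themselves: 1, 2, 4, 8, ..., 1024
print([2 ** i for i in range(11)])
```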
Binary is more annoying for humans to work with because it is less compact than decimal, but it is far easier when dealing with computers. A lot of that is down to the fact that hardware would have to distinguish between 10 different voltage levels as opposed to 2. Now you might think that it’s easy to distinguish between, say, 3V and 5V, but in a CPU you’re not dealing with V, you’re dealing with mV. Getting a CPU to distinguish between 0.05mV and 0.055mV is a total pain.
Hopefully that wasn’t too complex to understand.