Speaking of the shackles of C: any ideas why C chose "signed int" as the default? Especially the signed part.

Assuming you mean arithmetic versus logical shifting, the C standard says that an unsigned right shift is logical, while right-shifting a negative signed value is implementation-defined (in practice either arithmetic or logical).
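
A quick sketch of the difference, assuming a typical 32-bit int; the comment on the signed case describes what most compilers do, not something the standard guarantees:

```c
#include <stdio.h>

int main(void) {
    unsigned int u = 0xF0000000u;
    int s = -16;

    /* Unsigned right shift is always logical: zeros shift in from the left. */
    printf("u >> 4 = 0x%X\n", u >> 4);   /* 0x0F000000 */

    /* Right-shifting a negative signed value is implementation-defined.
       Most compilers do an arithmetic shift, preserving the sign bit,
       so this typically prints -4, but that is not guaranteed. */
    printf("s >> 2 = %d\n", s >> 2);

    return 0;
}
```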

C often inherited semantics from the underlying PDP-11 architecture, where two's-complement signed arithmetic “came for free.” Sure, unsigned ints let you trade away the sign for 2x extra range, but for most integer algorithms you’ll want the sign. (Going for more range is an engineering decision you make when cornered.) In the days of integer BASIC, which lacked type declarations, the assumption was signed numbers. Why would C be any different? Finally, when you look at the functionality and tiny source code of the earliest UNIXen, you realize that K&R were masters of Minimalism.
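
For anyone who hasn't looked at <limits.h> lately, the tradeoff in concrete terms (assuming a 32-bit int, which is typical but not required):

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Same number of bits, different interpretations: signed gives up
       half the positive range to represent negative values. */
    printf("INT_MAX  = %d\n", INT_MAX);   /* typically  2147483647 */
    printf("INT_MIN  = %d\n", INT_MIN);   /* typically -2147483648 */
    printf("UINT_MAX = %u\n", UINT_MAX);  /* typically  4294967295 */
    return 0;
}
```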
