C and C++ run on many different architectures and machine types. Consequently, those machines can use different representations of numbers, two's complement and ones' complement being the most common. In general, you should not rely on a particular representation in your program.

For unsigned integer types (`size_t` being one of those), the C standard (and the C++ standard too, I think) specifies precise overflow rules. In short, if `SIZE_MAX` is the maximum value of the type `size_t`, then the expression

`(size_t) (SIZE_MAX + 1)`

is guaranteed to be `0`, and therefore, you can be sure that `(size_t) -1` is equal to `SIZE_MAX`. The same holds true for other unsigned types.

Note that the above holds true for all unsigned types, *even if the underlying machine doesn't represent numbers in two's complement*. In that case, the compiler has to make sure the identity holds true.

Also, the above means that you can’t rely on specific representations for *signed* types.

*Edit*: In order to answer some of the comments:

Let’s say we have a code snippet like:

```
int i = -1;
long j = i;
```

There is a type conversion in the assignment to `j`. Assuming that `int` and `long` have different sizes (as on most [all?] 64-bit systems), the bit patterns at the memory locations for `i` and `j` are going to be different, because they have different sizes. The compiler makes sure that the *values* of `i` and `j` are `-1`.

Similarly, when we do:

```
size_t s = (size_t) -1;
```

There is a type conversion going on. The `-1` is of type `int`. It has a bit pattern, but that is irrelevant for this example, because when the conversion to `size_t` takes place due to the cast, the compiler will translate the *value* according to the rules for the destination type (`size_t` in this case). Thus, even if `int` and `size_t` have different sizes, the standard guarantees that the value stored in `s` above will be the maximum value that `size_t` can take.

If we do:

```
long j = LONG_MAX;
int i = j;
```

then, if `LONG_MAX` is greater than `INT_MAX`, the value in `i` is implementation-defined (C89, section 3.2.1.2).