Remove remaining conversions from negative float64_t to unsigned
integers, which are undefined behavior.
As a result, this test will also succeed on platforms that implement
those conversions differently than x86 does. This addresses one of the
issues reported in #2815.
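
For reference, here is a minimal sketch of the kind of conversion that was
removed; the identifiers are illustrative and not taken from the actual test
source:

```
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types : require

void main()
{
    const float64_t negF64 = float64_t(-5.0);
    // Undefined behavior: a negative float converted to an unsigned integer.
    // Conversions of this form were removed from the test.
    uint64_t u64 = uint64_t(negF64);
    uint32_t u32 = uint32_t(negF64);
}
```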
To extend coverage, add conversions from negative float16_t and float32_t
values to bool, to all signed integer types (including those from
GL_EXT_shader_explicit_arithmetic_types), and to all float types from the
same extension. Converting negative float values to unsigned integers is
undefined behavior, so those conversions are excluded.
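
A minimal sketch of the added coverage, assuming the umbrella
GL_EXT_shader_explicit_arithmetic_types extension enables all explicit types
(identifiers are illustrative, not the actual test source):

```
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types : require

void main()
{
    const float16_t negF16 = float16_t(-5.0);
    const float32_t negF32 = float32_t(-5.0);

    // Negative float16_t/float32_t to bool and to every signed integer type.
    bool    b   = bool(negF16);
    int8_t  i8  = int8_t(negF16);
    int16_t i16 = int16_t(negF32);
    int32_t i32 = int32_t(negF16);
    int64_t i64 = int64_t(negF32);

    // Negative float16_t/float32_t to every float type from the extension.
    float16_t f16 = float16_t(negF32);
    float32_t f32 = float32_t(negF16);
    float64_t f64 = float64_t(negF16);

    // Conversions to unsigned integer types are deliberately omitted,
    // since converting a negative float to unsigned is undefined behavior.
}
```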
This change adds unary conversion folding when the source is a constant.
This fixes an ISV-reported issue where a declaration such as:
```
const float16_t f = float16_t(42.0);
```
would not compile, because the conversion operator always produced an
EvqTemporary node even when it could have produced an EvqConst.
I've also added a test case that verifies that all basic-type to
basic-type conversions work.
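
A minimal sketch of the shape of that test; the identifiers and the exact set
of conversions are illustrative, not the actual test file:

```
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types : require

// Each initializer is a conversion of a constant, so with folding in place
// every declaration below yields an EvqConst and compiles as a constant.
const float16_t cF16 = float16_t(42.0);
const float32_t cF32 = float32_t(cF16);
const float64_t cF64 = float64_t(cF32);
const int8_t    cI8  = int8_t(cF16);
const uint16_t  cU16 = uint16_t(cI8);
const int32_t   cI32 = int32_t(cU16);
const bool      cB   = bool(cI32);

void main() {}
```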