Adds a transformation step to post processing.
Two modes are available:
1) keep
- keeps samplers, textures, and sampled textures as is
2) transform pure textures into sampled textures and remove pure samplers
- removes all pure samplers
- transforms each pure texture into its sampled counterpart
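A hedged HLSL-level illustration of what the second mode means for separate objects (the names and registers below are made up):

    Texture2D    g_tex  : register(t0);  // a pure texture
    SamplerState g_samp : register(s0);  // a pure sampler

    float4 main(float2 uv : TEXCOORD0) : SV_Target
    {
        // Mode 1) keeps both objects as declared; mode 2) turns g_tex into its
        // sampled (combined) counterpart and removes the pure sampler g_samp.
        return g_tex.Sample(g_samp, uv);
    }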
Change-Id: If54972e8052961db66c23f4b7e719d363cf6edbd
Also provides an option to auto-assign locations.
Existing tests use this option to avoid the error message;
however, it is not fully implemented yet.
This adds infrastructure suitable for any front end to create SPIR-V loop
control flags. The only current front end doing so is HLSL.
[unroll] turns into spv::LoopControlUnrollMask
[loop] turns into spv::LoopControlDontUnrollMask
no specification means spv::LoopControlMaskNone
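As a sketch (the loop bodies and the 'bound' uniform are made up), the HLSL attributes map onto the SPIR-V loop control masks like this:

    int bound;   // made-up uniform, just to keep the second and third loops dynamic

    float4 main() : SV_Target
    {
        float sum = 0;

        [unroll]                          // spv::LoopControlUnrollMask
        for (int i = 0; i < 4; ++i)
            sum += i;

        [loop]                            // spv::LoopControlDontUnrollMask
        for (int j = 0; j < bound; ++j)
            sum += j;

        for (int k = 0; k < bound; ++k)   // no attribute: spv::LoopControlMaskNone
            sum -= k;

        return float4(sum, 0, 0, 1);
    }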
Adds --hlsl-iomap option to perform IO mapping in HLSL register space.
--shift-cbuffer-binding is now a synonym for --shift-ubo-binding.
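A hedged sketch of what mapping in HLSL register space means, assuming bindings follow the register numbers within each class (declarations and numbers are illustrative, not from the tests):

    Texture2D    g_tex  : register(t3);  // binding taken from t3 (plus any t-class shift)
    SamplerState g_samp : register(s1);  // binding taken from s1 (plus any s-class shift)
    cbuffer Consts      : register(b2)   // binding taken from b2 (plus any b-class shift)
    {
        float4 g_color;
    };

    float4 main(float2 uv : TEXCOORD0) : SV_Target
    {
        return g_tex.Sample(g_samp, uv) * g_color;
    }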
The ideal way to do this seems to be passing in a dedicated IO resolver, but
that would require more intrusive restructuring, so it is probably best left for its
own PR.
The TDefaultHlslIoResolver class and the former TDefaultIoResolver class
share quite a bit of mechanism in a common base class.
TODO: tbuffers are landing in the wrong register class, which needs some
investigation. They're either wrong upstream, or the detection in the
resolver is wrong.
New command line option --shift-ssbo-binding mirrors --shift-ubo-binding, etc.
New reflection query getLocalSize(int dim) queries the local size, e.g., CS threads.
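For example, for a compute shader like the made-up one below, getLocalSize(0), getLocalSize(1), and getLocalSize(2) would report 8, 8, and 1; the buffer's binding is the kind --shift-ssbo-binding offsets, assuming RWStructuredBuffer maps to an SSBO here:

    RWStructuredBuffer<float> data : register(u0);

    [numthreads(8, 8, 1)]   // the local size reported by getLocalSize(dim)
    void main(uint3 tid : SV_DispatchThreadID)
    {
        data[tid.x] = 1.0;
    }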
This obsoletes WIP PR #704, which was built on the pre-entry-point-wrapping master. The new version
here uses entry point wrapping.
This is a limited implementation of tessellation shaders. In particular, the following are not functional,
and will be added as separate stages to reduce the size of each PR.
* patchconstantfunctions accepting per-control-point input values, such as
const OutputPatch <hs_out_t, 3> cpv, are not implemented.
* patchconstantfunctions whose signature requires an aggregate input type such as
a structure containing builtin variables. Code to synthesize such calls is not
yet present.
These restrictions will be relaxed as soon as possible. Simple cases can compile now: see for example
Test/hlsl.hull.1.tesc - e.g., writing to inner and outer tessellation factors.
PCF invocation is synthesized as an entry point epilogue protected behind a barrier and a test on
invocation ID == 0. If there is an existing invocation ID variable, it will be used; otherwise one is
added to the linkage. The PCF and the shader EP interfaces are unioned, and builtins appearing in
the PCF but not the EP are also added to the linkage and synthesized as shader inputs.
Parameter matching to (eventually arbitrary) PCF signatures is by builtin variable type. Any user
variables in the PCF signature will result in an error. Overloaded PCF functions will also result in
an error.
[domain()], [partitioning()], [outputtopology()], [outputcontrolpoints()], and [patchconstantfunction()]
attributes on the shader entry point are in place, with the exception of the Pow2 partitioning mode.
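A minimal hull-shader sketch in the spirit of Test/hlsl.hull.1.tesc (the struct and function names are illustrative, not the actual test content), showing the supported attributes and a PCF writing the inner and outer tessellation factors:

    struct hs_in_t  { float4 pos : POSITION; };
    struct hs_out_t { float4 pos : POSITION; };

    struct hs_pcf_t
    {
        float tessOuter[3] : SV_TessFactor;
        float tessInner    : SV_InsideTessFactor;
    };

    hs_pcf_t PCF()
    {
        hs_pcf_t pcf;
        pcf.tessOuter[0] = 4.0;
        pcf.tessOuter[1] = 4.0;
        pcf.tessOuter[2] = 4.0;
        pcf.tessInner    = 4.0;
        return pcf;
    }

    [domain("tri")]
    [partitioning("fractional_odd")]
    [outputtopology("triangle_cw")]
    [outputcontrolpoints(3)]
    [patchconstantfunction("PCF")]
    hs_out_t main(InputPatch<hs_in_t, 3> ip, uint cpid : SV_OutputControlPointID)
    {
        hs_out_t o;
        o.pos = ip[cpid].pos;
        return o;
    }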
Since EOpMatrixSwizzle is a new op, existing back-ends only work when the
front end first decomposes it to other operations. So far, this is only
being done for simple assignment into matrix swizzles.
This partially addresses issue #670, for when the matrix swizzle
degenerates to a component or column: m[c], m[c][r] (where HLSL
swaps rows and columns for the user's view).
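A short illustration of the degenerate cases that now compile (names are made up):

    static float4x4 m;

    float4 main() : SV_Target
    {
        m[2]    = float4(1, 2, 3, 4);  // m[c]: assign a whole row/column
        m[1][3] = 5.0;                 // m[c][r]: assign a single component
        return m[2];
    }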
An error message is given for the arbitrary cases not yet covered.
The covered cases will work for arbitrary use as l-values.
Future work will handle more arbitrary swizzles, which might
not work as arbitrary l-values.
- fixed ParseHelper.cpp newlines (crlf -> lf)
- removed trailing white space in most source files
- fixed some spelling issues
- removed extra blank lines
- converted tabs to spaces
- replaced a #include comment about no location
This PR handles implicit promotions for intrinsics when there is no exact match,
such as clamp(int, bool, float). In this case the int and bool will
be promoted to float, and the clamp(float, float, float) form used.
These promotions can be mixed with shape conversions, e.g., clamp(int, bool2, float2).
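For instance, a hedged sketch (not one of the checked-in tests; the globals are only there to supply the mixed argument types):

    int    i;
    bool   b;
    float  f;
    bool2  b2;
    float2 f2;

    float4 main() : SV_Target
    {
        float  r1 = clamp(i, b, f);    // int and bool promote to float;
                                       // the clamp(float, float, float) form is used
        float2 r2 = clamp(i, b2, f2);  // promotion mixed with shape conversion
        return float4(r1, r2, 1.0);
    }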
Output conversions are handled either via the existing addOutputArgumentConversion
function, which this PR generalizes to handle either aggregates or unaries, or by
intrinsic decomposition. If there are methods or intrinsics to be decomposed,
then decomposition is responsible for any output conversions, which turns out to
happen automatically in all current cases. This can be revisited once inout
conversions are in place.
Some cases of actual ambiguity were fixed in several tests, e.g., spv.register.autoassign.*
Some intrinsics with only uint versions were expanded to signed ints natively, where the
underlying AST and SPIR-V support that. E.g., countbits. This avoids extraneous
conversion nodes.
A new function promoteAggregate is added, and used by findFunction. This is essentially
a generalization of the "promote 1st or 2nd arg" algorithm in promoteBinary.
The actual selection proceeds in three steps, as described in the comments in
hlslParseContext::findFunction:
1. Attempt an exact match. If found, use it.
2. If not, obtain the operator from step 1, and promote arguments.
3. Re-select the intrinsic overload from the results of step 2.
Rationalizes the entire tracking of the linker object nodes, affecting
GLSL, HLSL, and SPIR-V, to allow tracked objects to be fully edited before
their types are snapshotted for the linker objects.
This should only affect things when the rest of the AST contained no reference to
the symbol, because normal AST nodes were not stale. It will also only affect such
objects when their types were edited.
This PR adds:
1. The "u" register class for RW* objects.
2. --shift-image-bindings (== --sib), analogous to --shift-texture-bindings etc.
3. Case-insensitive register classes.
4. Tests for above.
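A hedged example of the "u" register class in use (declarations are illustrative):

    RWTexture2D<float4> g_output  : register(u1);  // "u" class; shift with --shift-image-bindings
    RWBuffer<uint>      g_counter : register(U2);  // the class letter may also be upper case

    [numthreads(8, 1, 1)]
    void main(uint3 tid : SV_DispatchThreadID)
    {
        g_output[tid.xy] = float4(0, 0, 0, 1);
        g_counter[tid.x] = tid.x;
    }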
A need arose to use capabilities from TIntermediate during
node promotion. These methods have been moved from virtual
methods on the TIntermUnary and TIntermBinary nodes to methods
on TIntermediate, so it is easy for them to construct new nodes
and so on.
This is done as a separate commit to verify that no test results
are changed as a result.
This fixes defects as follows:
1. handleLvalue could be called on a non-L-value, and it shouldn't be.
2. HLSL allows unary negation on non-bool values. TUnaryOperator::promote
can now promote other types (e.g., int, float) to bool for this op.
3. HLSL allows binary logical operations (&&, ||) on arbitrary types, similar to
(2).
4. HLSL allows the mod operation on arbitrary types, which will be promoted.
E.g., int % float -> float % float.
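Hedged examples of the operations in (2)-(4), with made-up globals supplying the types:

    int   i;
    float f;

    float4 main() : SV_Target
    {
        bool  n = !i;       // (2): int promoted to bool for the negation
        bool  l = i && f;   // (3): both operands promoted to bool
        float r = i % f;    // (4): int % float -> float % float
        return float4(n, l, r, 1.0);
    }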
- hlsl.struct.frag variable changed to static, assignment replaced.
- Created new low level functions addBinaryNode and addUnaryNode. These are
used by higher level functions such as addAssignment, and do not do any
argument promotion or conversion of any sort.
- The two functions above are now used in RWTexture lvalue conversions. Also,
other direct creations of unary or binary nodes now use them, e.g., addIndex.
This cleans up some existing code.
- removed handling of EOpVectorTimesScalar from promote()
- removed comment from ParseHelper.cpp
If a member-wise assignment from a non-flattened struct to a flattened struct sees a complex R-value
(not a symbol), it now creates a temporary to hold that value, to avoid repeating the R-value.
This avoids, e.g., duplicating a whole function call. Also, it avoids re-using the AST node, making a
new one for each member inside the member loop.
The latter (re-use of AST node) was also an issue in the GetDimensions intrinsic decomposition,
so this PR fixes that one too.
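A hedged illustration of the first case, assuming the entry point's output struct is one that gets flattened (types and names are made up):

    struct PSOut
    {
        float4 a : SV_Target0;
        float4 b : SV_Target1;
    };

    PSOut makeOut()
    {
        PSOut o;
        o.a = float4(1, 0, 0, 1);
        o.b = float4(0, 1, 0, 1);
        return o;
    }

    PSOut main()
    {
        // The call result is a complex R-value: it is now evaluated once into a
        // temporary and copied member by member, instead of the call being
        // duplicated for each flattened member.
        return makeOut();
    }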
This checkin adds a --flatten-uniform-arrays option which can break
uniform arrays of samplers, textures, or UBOs up into individual
scalars named (e.g.) myarray[0], myarray[1], etc. These appear as
individual linkage objects.
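For example (a hedged sketch; the names are illustrative):

    Texture2D    g_tex[3];   // with --flatten-uniform-arrays, appears as the linkage
                             // objects g_tex[0], g_tex[1], and g_tex[2]
    SamplerState g_samp;

    float4 main(float2 uv : TEXCOORD0) : SV_Target
    {
        return g_tex[1].Sample(g_samp, uv);
    }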
Code notes:
- shouldFlatten internally calls shouldFlattenIO, and shouldFlattenUniform,
but is the only flattening query directly called.
- flattenVariable will handle structs or arrays (but not yet arrayed structs;
this is tested and an error is generated).
- There's some error checking around unhandled situations. E.g., flattening
uniform arrays with initializer lists is not implemented.
- This piggybacks on as much of the existing mechanism for struct flattening
as it can. E.g., it uses the same flattenMap, and the same
flattenAccess() method.
- handleAssign() has been generalized to cope with either structs or arrays.
- Extended test infrastructure to test flattening ability.
This PR adds the ability to offset sampler, texture, and UBO bindings
from provided base bindings, and to auto-number bindings that are not
provided with explicit register numbers. The mechanism works as
follows:
- Offsets may be given on the command line for all stages, or
individually for one or more single stages, in which case the
offset will be auto-selected according to the stage being
compiled. There is also an API to set them. The new command line
options are --shift-sampler-binding, --shift-texture-binding, and
--shift-UBO-binding.
- Uniforms which are not given explicit bindings in the source code
are auto-numbered if and only if they are in live code as
determined by the algorithm used to build the reflection
database, and the --auto-map-bindings option is given. This auto-numbering
avoids using any binding slots which were explicitly provided in
the code, whether or not that explicit use was live. E.g., "uniform
Texture1D foo : register(t3);" with --shift-texture-binding 10 will
reserve binding 13, whether or not foo is used in live code.
- Shorter synonyms for the command line options are available. See
the --help output.
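A hedged restatement of the example above as source plus options (the second declaration and the entry point are made up):

    // Compiled with: --shift-texture-binding 10 --auto-map-bindings
    Texture1D foo : register(t3);   // explicit t3: reserves binding 13 (3 + 10)
    Texture1D bar;                  // no register: auto-numbered to a free binding
                                    // (skipping 13), and only if bar is in live code

    float4 main() : SV_Target
    {
        return foo.Load(int2(0, 0)) + bar.Load(int2(0, 0));
    }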
The testing infrastructure is slightly extended to allow use of the
binding offset API, and two new tests spv.register.(no)autoassign.frag are
added for comparing the resulting SPIR-V.