The approach for the tests in wetzel is explained in a comment in `test.js`: based on a set of example schemas, the tests consist of automatically generating the property reference in different configurations, and comparing the resulting files with the "golden" output that is checked into the repository. The exact inputs and parametrizations are summarized in `index.json`.
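Purely as a hypothetical illustration of the kind of parametrization that such a file could contain (the field names and options here are made up and need not match the actual `index.json` in the repository):

```json
{
  "schemas": [
    {
      "input": "image.schema.json",
      "golden": "image.md",
      "options": { "headerLevel": 2 }
    }
  ]
}
```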
The example schemas for the tests are apparently based on glTF. But additional features have been added to some of these schema files, and it is not always clear which aspects of the schema files are supposed to cover which functionality. One specific example: the `image.schema.json` is largely a glTF image, but contains some tests for fractions. I think that it could make sense to break these tests down into smaller pieces that have a "semantic meaning". For example, these 'fractions' could be tested with a dedicated `fractions.schema.json`.
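A minimal sketch of what such a dedicated schema could look like, assuming Draft 04 as in the glTF schemas (the file content, property names, and values are invented for illustration):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "fractions",
  "type": "object",
  "description": "Tests the rendering of fractional numeric constraints (multipleOf, minimum, maximum) in the generated property reference.",
  "properties": {
    "fraction": {
      "type": "number",
      "description": "A fractional value between 0.0 and 1.0, in steps of 0.25.",
      "multipleOf": 0.25,
      "minimum": 0.0,
      "maximum": 1.0
    }
  }
}
```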
Other aspects that could be covered with dedicated tests include:

- circular references
- nested type definitions
- the handling of `additionalProperties`
- details about strings (patterns, lengths, formats)
- maybe important: subtle differences depending on the JSON Schema version, for example, the change of the meaning of `minimum`/`exclusiveMinimum` that was made after Draft 04 - see Range for details, and the sketch after this list
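To illustrate the last point (this is standard JSON Schema behavior, not specific to wetzel): in Draft 04, `exclusiveMinimum` is a boolean that modifies `minimum`, so "strictly greater than zero" is written as

```json
{
  "type": "number",
  "minimum": 0,
  "exclusiveMinimum": true
}
```

whereas from Draft 06 onwards, `exclusiveMinimum` is itself a number, and the same constraint becomes

```json
{
  "type": "number",
  "exclusiveMinimum": 0
}
```

A generator that supports both versions therefore has to take the declared `$schema` into account when rendering the allowed range.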
The advantage would be that these schemas can be documented via the `description`, clearly explaining which aspect of the schema is tested there, and that it would be easier to apply specific changes, or to add and test specific new functionality.
There are some questions that will certainly come up either in this process, or in the medium term in general:
- Which parts of JSON Schema are supposed to be supported in the first place?
- What should the generated documentation look like, exactly?
- To what extent should details of the generated result be configurable (e.g. via CLI parameters)?
But I think that a few first steps for creating such a set of example schemas could be done independently.
(Note: all this could be done as a pure addition to the current tests. But we might as well try to "clean up" the current schemas so that they more closely resemble the relevant parts of the current glTF schema, and use this part as a more coarse-grained "integration test". These changes would solely affect the test schemas, and not any part of the actual schema generation code, just to avoid regressions.)