# Development

A few notes that should help you better understand the development process of the project.

## Tools

### Swift Package Manager

The project is built with the Swift Package Manager. You can find its definition in the `Package.swift` file.

### SwiftLint

We have set up SwiftLint to enforce Swift style and conventions. You can run the following command to check that your code does not violate any of the rules:

```shell
swiftlint
```

### SwiftFormat

We have set up SwiftFormat to enforce consistent code formatting. You can run the following command in the root project directory to format your code:

```shell
swiftformat .
```

## Project Structure

The file structure of xcdiff, with comments, is shown below:

```
.
├── CommandTests
│   └── Generated
├── Fixtures
├── Sources
│   ├── XCDiff              # main.swift
│   ├── XCDiffCommand       # command line interface
│   └── XCDiffCore
│       ├── Comparator      # all comparators
│       ├── Library         # helpers
│       └── ResultRenderer  # formatters
│           ├── Output
│           ├── ProjectResultRenderer
│           └── Renderer
└── Tests
    ├── XCDiffCommandTests
    └── XCDiffCoreTests
```

## Tests

### Fixtures

The project has a `Fixtures` folder that hosts sample projects that aid the development and testing of xcdiff.

For example, a quick way to test out a local version of xcdiff is via:

```shell
cd Fixtures
swift run xcdiff
```

That will run xcdiff against two sample projects that have a diverse set of differences. Those fixtures are also used as part of the automated Command Tests.

### Command Tests

Command Tests (as we call them) are a very convenient type of integration test; they exercise the command-line and core layers of xcdiff, as well as the integration with the underlying XcodeProj framework.

`CommandBasedTests.swift` scans the `CommandTests` directory for markdown files, each of which represents a single integration test.

The markdown files follow a very specific pattern:

````
# Command
```json
<JSON REPRESENTATION OF THE COMMAND>
```

# Expected exit code
<NUMBER>

# Expected output
```
<CONTENT OF THE OUTPUT>
```
````
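For illustration, a file following this pattern can be split into its three parts as in the sketch below. This is a hedged Python sketch only: the actual parsing lives in `CommandBasedTests.swift` and is written in Swift, so the function and variable names here are assumptions.

```python
import json
import re

FENCE = "`" * 3  # a markdown code fence, built here to avoid nesting literal fences


def parse_command_test(markdown: str):
    """Split a command-test markdown file into (command, exit_code, output).

    Illustrative only; the real harness implements its own parsing in Swift.
    """
    command = json.loads(
        re.search(rf"# Command\s+{FENCE}json\n(.*?)\n{FENCE}", markdown, re.S).group(1)
    )
    exit_code = int(re.search(r"# Expected exit code\s+(\d+)", markdown).group(1))
    output = re.search(rf"# Expected output\s+{FENCE}\n(.*?){FENCE}", markdown, re.S).group(1)
    return command, exit_code, output
```

A parser along these lines would turn the `# Command` section into an argument list, the exit code into an integer, and the expected output into a plain string to compare against the actual run.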

Additionally, there are predefined variables that can be used:

- `{ios_project_1}`: evaluates to `Fixtures/ios_project_1/Project.xcodeproj`
- `{ios_project_2}`: evaluates to `Fixtures/ios_project_2/Project.xcodeproj`
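As a rough illustration, the placeholder expansion behaves like the sketch below. The real substitution is implemented inside the test harness itself, so the names used here are assumptions.

```python
# Illustrative sketch of the placeholder expansion described above; the real
# test harness performs its own substitution, so these names are assumptions.
FIXTURE_VARIABLES = {
    "ios_project_1": "Fixtures/ios_project_1/Project.xcodeproj",
    "ios_project_2": "Fixtures/ios_project_2/Project.xcodeproj",
}


def expand_variables(arguments):
    """Replace each {name} placeholder in the command tokens with its path."""
    expanded = []
    for token in arguments:
        for name, path in FIXTURE_VARIABLES.items():
            token = token.replace("{" + name + "}", path)
        expanded.append(token)
    return expanded
```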

For example, the command:

```shell
xcdiff -p1 Fixtures/ios_project_1/Project.xcodeproj -p2 Fixtures/ios_project_2/Project.xcodeproj
```

is covered by the following integration test:

````
# Command
```json
["-p1", "{ios_project_1}", "-p2", "{ios_project_2}"]
```

# Expected exit code
1

# Expected output
```
✅ TARGETS > NATIVE targets
✅ TARGETS > AGGREGATE targets
❌ SOURCES > "Project" target
✅ SOURCES > "ProjectTests" target
✅ SOURCES > "ProjectUITests" target

```
````

`CommandTests/Generated` contains test files that are auto-generated by the script `Scripts/generate_tests_commands_files.py`. You can control the generated tests by modifying the following JSON files:

- `manual_test_commands.json`: hosts a series of commands that we would like to generate test cases for. This list is updated manually whenever we need to add more Command Tests.
  - `alias`: a short, file-system-friendly name used to prefix the generated test
  - `command`: an array of tokenized parameters from which the command is constructed
  - `comment`: a comment describing the purpose of the test
- `generated_test_commands.json`: hosts a few parameters, such as targets and tags, from which a series of combinatorial commands are generated (quite meta!)
  - e.g. `"targets": ["A", "B"]` and `"tags": ["targets", "sources"]` generate Command Tests for:
    - `["-p1", "{ios_project_1}", "-p2", "{ios_project_2}", "-g", "targets", "-t", "A"]`
    - `["-p1", "{ios_project_1}", "-p2", "{ios_project_2}", "-g", "sources", "-t", "A"]`
    - `["-p1", "{ios_project_1}", "-p2", "{ios_project_2}", "-g", "targets", "-t", "B"]`
    - `["-p1", "{ios_project_1}", "-p2", "{ios_project_2}", "-g", "sources", "-t", "B"]`
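The combinatorial expansion described above can be sketched as follows. This is a hedged approximation: the actual logic lives in `Scripts/generate_tests_commands_files.py` and may differ in its details.

```python
from itertools import product


def generate_commands(targets, tags):
    """Build one xcdiff command per (target, tag) combination.

    Sketch of the expansion described above; names are assumptions, the real
    implementation is in Scripts/generate_tests_commands_files.py.
    """
    base = ["-p1", "{ios_project_1}", "-p2", "{ios_project_2}"]
    return [base + ["-g", tag, "-t", target] for target, tag in product(targets, tags)]
```

With `targets = ["A", "B"]` and `tags = ["targets", "sources"]` this yields the four commands listed above.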

Once generated, the markdown files within `CommandTests/Generated` allow us to focus on reviewing each command and its corresponding output to ensure we are happy with the results. They also help us flag any unwanted changes or regressions to those results in the future as we add or modify comparators. If you have trouble finding the newly generated files, check `git status` for new markdown files.

**IMPORTANT:** The script needs to be updated and run every time we add a new comparator, in order to re-generate the test cases.