Final aesthetic updates to the changelog online documentation.
… will be avoided. We hope that this is the last time we have such a shift.

The changes allow us to implement two new features:

1. Multiple output tensors

   Previously you could pass an array of several input tensors to a Torch model, but
   only receive a single output tensor back. Now you can use models that return several
   output tensors by passing an array of output tensors instead.

2. Preparation for autograd functionality

   We hope to make it easier to access the autograd features of PyTorch from within Fortran.
   To do this we needed to change how data is assigned from a Fortran array to a Torch tensor.
   This is now done via a subroutine call rather than a function.

<br>

## Changes and how to update your code

<br>

#### `torch_tensor`s are created using a subroutine call, not a function

Previously you would have created a Torch tensor and assigned some Fortran data to it as follows:
```fortran
real, dimension(5), target :: fortran_data
type(torch_tensor) :: my_tensor
integer :: tensor_layout(1) = [1]

my_tensor = torch_tensor_from_array(fortran_data, tensor_layout, torch_kCPU)
```

<br>
Now a call is made to a subroutine with the tensor as the first argument:
```fortran
real, dimension(5), target :: fortran_data
type(torch_tensor) :: my_tensor
integer :: tensor_layout(1) = [1]

call torch_tensor_from_array(my_tensor, fortran_data, tensor_layout, torch_kCPU)
```
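
For arrays of higher rank the pattern is the same; only the layout vector grows, with one
entry per dimension. A minimal 2D sketch, assuming the library's `ftorch` module and that
`torch_tensor_from_array` accepts higher-rank arrays in the same way:
```fortran
program tensor_2d_sketch
  ! Sketch only: the module name and higher-rank support are assumptions.
  use ftorch
  implicit none

  real, dimension(3, 4), target :: fortran_data_2d
  type(torch_tensor) :: my_tensor_2d
  integer :: tensor_layout_2d(2) = [1, 2]  ! one layout entry per dimension

  fortran_data_2d = 0.0
  call torch_tensor_from_array(my_tensor_2d, fortran_data_2d, tensor_layout_2d, torch_kCPU)
end program tensor_2d_sketch
```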

<br>

#### `module` becomes `model` and loading becomes a subroutine call, not a function

Previously a neural net was referred to as a '`module`' and loaded using appropriately
named functions and types.
```fortran
type(torch_module) :: model
model = torch_module_load(args(1))
call torch_module_forward(model, in_tensors, out_tensors)
```

<br>
Following user feedback we now refer to a neural net and its associated types and calls
as a '`model`'.
The process of loading a net is also now a subroutine call, for consistency with the
tensor creation described above:
```fortran
type(torch_model) :: model
call torch_model_load(model, args(1))
call torch_model_forward(model, in_tensors, out_tensors)
```
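
Putting the pieces together, here is a minimal end-to-end sketch of the new API; the
`ftorch` module name and the placeholder path `'saved_model.pt'` are assumptions, and the
call shapes are those shown above:
```fortran
program forward_sketch
  ! Sketch only: module name and model path are assumptions.
  use ftorch
  implicit none

  type(torch_model) :: model
  type(torch_tensor) :: in_tensors(1), out_tensors(1)
  real, dimension(5), target :: input_data, output_data
  integer :: layout(1) = [1]

  input_data = [1.0, 2.0, 3.0, 4.0, 5.0]

  ! Tensor creation is a subroutine call with the tensor as first argument.
  call torch_tensor_from_array(in_tensors(1), input_data, layout, torch_kCPU)
  call torch_tensor_from_array(out_tensors(1), output_data, layout, torch_kCPU)

  ! Loading and inference are both subroutine calls.
  call torch_model_load(model, 'saved_model.pt')
  call torch_model_forward(model, in_tensors, out_tensors)

  write (*, *) output_data
end program forward_sketch
```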

<br>

#### `n_inputs` is no longer required

Previously when you called the forward method on a net you had to specify the number of tensors
in the array of inputs:
```fortran
call torch_model_forward(model, in_tensors, n_inputs, out_tensors)
```

<br>
Now all that is supplied to the forward call is the model, and the arrays of input and
output tensors. No need for `n_inputs` (or `n_outputs`)!
```fortran
call torch_model_forward(model, in_tensors, out_tensors)
```

<br>

#### Outputs now need to be an array of `torch_tensor`s

Previously you passed an array of `torch_tensor` types as inputs, and a single `torch_tensor`
to the forward method:
```fortran
type(torch_tensor), dimension(n_inputs) :: input_tensor_array
type(torch_tensor) :: output_tensor
...
call torch_model_forward(model, input_tensor_array, n_inputs, output_tensor)
```

<br>
Now both the inputs and the outputs need to be an array of `torch_tensor` types:
```fortran
type(torch_tensor), dimension(n_inputs) :: input_tensor_array
type(torch_tensor), dimension(n_outputs) :: output_tensor_array
...
call torch_model_forward(model, input_tensor_array, output_tensor_array)
```
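
An illustrative fragment for a model with two outputs, continuing the snippet above and
assuming, as with inputs, that each output tensor is created from a Fortran array and
shares its memory (the array names here are hypothetical):
```fortran
! Illustrative: two output tensors backed by two Fortran arrays.
real, dimension(5), target :: out_data1, out_data2
type(torch_tensor), dimension(2) :: output_tensor_array
integer :: layout(1) = [1]

call torch_tensor_from_array(output_tensor_array(1), out_data1, layout, torch_kCPU)
call torch_tensor_from_array(output_tensor_array(2), out_data2, layout, torch_kCPU)
call torch_model_forward(model, input_tensor_array, output_tensor_array)
! out_data1 and out_data2 now hold the model's two outputs.
```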
