In the functions 'create_global_model_copy' and 'copy_params', only the trainable variables of ResNet are copied, excluding the BatchNorm (BN) layers' statistical information such as 'num_batches_tracked', 'running_var', and 'running_mean'. Does this imply that the server does not aggregate this information into the global model? If so, why does the global model from the 1900th iteration contain non-zero values for these statistics?
Additionally, the lines "model = self.helper.local_model" and "self.copy_params(model, global_model_copy)" suggest that the local model is refreshed with the global model's variables, excluding the BN statistics. This raises a concern, as it implies that all clients share their BatchNorm statistical information, which appears unusual. Could you please clarify this aspect?
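The distinction at issue can be reproduced with a minimal PyTorch sketch (not the A3FL code): `named_parameters()` yields only trainable tensors, while BN statistics such as `running_mean` are registered as buffers and therefore only appear via `named_buffers()` or `state_dict()`. A copy loop driven by `named_parameters()` would skip them silently.

```python
import torch.nn as nn

# Minimal model with a BatchNorm layer (illustrative, not from the repo).
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

# Trainable parameters: conv/BN weights and biases only.
param_names = {name for name, _ in model.named_parameters()}
# Buffers: running_mean, running_var, num_batches_tracked.
buffer_names = {name for name, _ in model.named_buffers()}

print(any("running_mean" in n for n in param_names))   # False
print(any("running_mean" in n for n in buffer_names))  # True
```

So a copy based on `named_parameters()` leaves each client's BN buffers untouched, whereas a `state_dict()`-based copy would transfer them; which one the server-side aggregation uses is exactly what the question asks.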
Hello, thank you for publishing your algorithm and its implementation. We are using A3FL in our work as a state-of-the-art attack on several types of architectures.
I have a question about attacker.py:L61-62: the cosine similarity is computed on the gradients of the layers instead of on the weights themselves, as described in your paper (Section 4.1: A3FL setup). Is there a specific reason the cosine similarity is computed on the gradients? Computing the similarity on the weights seems to lead to similar results. Can you confirm that your attack still works when following the paper's description (cosine similarity on the weights), or did you notice that using the gradients is more effective?
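The two variants being compared can be sketched as follows. This is a minimal illustration, not the repository's code: the models, shapes, and the helper `flat_cosine` are hypothetical, and it assumes gradients have been populated by a backward pass before the gradient-based variant is evaluated.

```python
import torch
import torch.nn.functional as F

def flat_cosine(tensors_a, tensors_b):
    """Cosine similarity between two collections of tensors,
    each flattened and concatenated into a single vector."""
    a = torch.cat([t.detach().flatten() for t in tensors_a])
    b = torch.cat([t.detach().flatten() for t in tensors_b])
    return F.cosine_similarity(a, b, dim=0)

# Hypothetical local and global models (names are illustrative).
local_model = torch.nn.Linear(4, 2)
global_model = torch.nn.Linear(4, 2)

# Variant 1: similarity on the weights themselves (paper, Section 4.1).
sim_weights = flat_cosine(local_model.parameters(), global_model.parameters())

# Variant 2: similarity on the gradients (as in attacker.py),
# after a backward pass has filled in .grad on both models.
local_model(torch.randn(8, 4)).sum().backward()
global_model(torch.randn(8, 4)).sum().backward()
sim_grads = flat_cosine([p.grad for p in local_model.parameters()],
                        [p.grad for p in global_model.parameters()])
```

The question above is whether these two quantities behave interchangeably for the attack's adaptive loss, or whether the gradient-based variant was a deliberate choice.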
Thank you for bringing attention to the code.