Don't display Cache Load Factor if compute kernel is not uvm caching (#2529)

Summary:
Pull Request resolved: #2529

This statistic may add confusion unless the `fused_uvm_caching` kernel is being
used; see also T187360685.

Reviewed By: PaulZhang12

Differential Revision: D65231346

fbshipit-source-id: f5410d1183e5682b82a0256ae2eddb38b9f1c767
Keyan Pishdadian authored and facebook-github-bot committed Nov 6, 2024
1 parent 7e867ad commit 509b0d2
Showing 1 changed file with 9 additions and 5 deletions.
14 changes: 9 additions & 5 deletions torchrec/distributed/planner/stats.py
@@ -16,6 +16,7 @@
 
 from torch import nn
 
+from torchrec.distributed.embedding_types import EmbeddingComputeKernel
 from torchrec.distributed.planner.constants import BIGINT_DTYPE, NUM_POOLINGS
 from torchrec.distributed.planner.shard_estimators import _calculate_shard_io_sizes
 from torchrec.distributed.planner.storage_reservations import (
@@ -421,11 +422,14 @@ def log(
                 if hasattr(sharder, "fused_params") and sharder.fused_params
                 else None
             )
-            cache_load_factor = str(
-                so.cache_load_factor
-                if so.cache_load_factor is not None
-                else sharder_cache_load_factor
-            )
+            cache_load_factor = "None"
+            # Surfacing cache load factor does not make sense if not using uvm caching.
+            if so.compute_kernel == EmbeddingComputeKernel.FUSED_UVM_CACHING.value:
+                cache_load_factor = str(
+                    so.cache_load_factor
+                    if so.cache_load_factor is not None
+                    else sharder_cache_load_factor
+                )
             hash_size = so.tensor.shape[0]
             param_table.append(
                 [
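
For readers skimming the diff, here is a minimal, self-contained sketch of the gating rule this commit adds, assuming torchrec is installed. `FakeShardingOption` and `format_cache_load_factor` are hypothetical stand-ins for illustration; this is not the planner's actual code path.

# Hypothetical sketch (not from the commit): illustrates the new gating rule.
from dataclasses import dataclass
from typing import Optional

from torchrec.distributed.embedding_types import EmbeddingComputeKernel


@dataclass
class FakeShardingOption:
    # Stand-in for the planner ShardingOption fields used here.
    compute_kernel: str
    cache_load_factor: Optional[float] = None


def format_cache_load_factor(
    so: FakeShardingOption, sharder_cache_load_factor: Optional[float] = None
) -> str:
    # Only surface a cache load factor when the fused UVM caching kernel is in
    # use; for any other compute kernel the statistic is reported as "None".
    if so.compute_kernel == EmbeddingComputeKernel.FUSED_UVM_CACHING.value:
        return str(
            so.cache_load_factor
            if so.cache_load_factor is not None
            else sharder_cache_load_factor
        )
    return "None"


# Example: only the uvm-caching option reports a value.
print(format_cache_load_factor(FakeShardingOption("fused", 0.2)))  # -> None
print(
    format_cache_load_factor(
        FakeShardingOption(EmbeddingComputeKernel.FUSED_UVM_CACHING.value, 0.2)
    )
)  # -> 0.2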
