[WIP] Error logging callback #2533

Closed
wants to merge 178 commits into from

178 commits
55cf010
implement cot
bmosaicml Aug 22, 2023
91033fa
fix tests
bmosaicml Aug 22, 2023
d9ba6e2
Merge branch 'dev' into add_cot_eval
bmosaicml Aug 23, 2023
3ed0ade
debug print statement
bmosaicml Sep 11, 2023
ec6fc17
prelim commit
bmosaicml Sep 12, 2023
a59b644
fix max answer lengths for cot
bmosaicml Sep 12, 2023
97b1218
add output logger
bmosaicml Sep 12, 2023
7174e75
create eval output logger
bmosaicml Sep 12, 2023
fdbd53b
fix pyright; git push
bmosaicml Sep 12, 2023
909d07b
Merge branch 'dev' into error_logging_callback
bmosaicml Sep 13, 2023
9f4e3d2
change dist reduce fx
bmosaicml Sep 13, 2023
dce297c
Merge branch 'error_logging_callback' of github.com:bmosaicml/compose…
bmosaicml Sep 13, 2023
ea4e7ee
change dist reduce fx
bmosaicml Sep 13, 2023
5630c23
fix pyright
bmosaicml Sep 13, 2023
30623f7
Merge branch 'dev' into error_logging_callback
bmosaicml Sep 13, 2023
e161e33
Add nightly docker image (#2452)
j316chuck Aug 23, 2023
743fbe1
Fix local eval (#2465)
rishab-partha Aug 24, 2023
0c333b6
Add torch 2.1.0 args for github release-docker workflow
j316chuck Aug 24, 2023
da4e19f
Log system metrics on each event (#2412)
prithvikannan Aug 24, 2023
60d3dc6
Fix torch 2.1.0 docker tag (#2472)
j316chuck Aug 24, 2023
15385b2
Upstream Generate Callback (#2449)
irenedea Aug 25, 2023
ec59026
Upgrade torch nightly docker image for 0.18.3 NCCL version (#2476)
j316chuck Aug 25, 2023
a5ec1ac
Test pytorch 2.1.0 docker images on ci/cd (#2469)
j316chuck Aug 25, 2023
145aeb8
Fix huggingface tokenizer loading for slow tokenizers (#2483)
dakinggg Aug 28, 2023
816a61b
Deprecate Fused LayerNorm (#2475)
nik-mosaic Aug 28, 2023
de68763
Transformers upgrade (#2489)
dakinggg Aug 29, 2023
c4488b5
Update RTD build config with build.os (#2490)
bandish-shah Aug 29, 2023
d91fe4d
Upgrade torch docker version and github workflow tests (#2488)
j316chuck Aug 29, 2023
3a9706d
upgrade node version (#2492)
j316chuck Aug 29, 2023
ee67e99
Gating tying modules w/ FSDP for torch 2.0 (#2467)
bcui19 Aug 30, 2023
99b98ef
Removing min_params (#2494)
bcui19 Aug 30, 2023
91d961d
Fix torchmetrics backwards compatibility issue (#2468)
eracah Aug 31, 2023
6add304
Adding some fixes to FSDP tests (#2495)
bcui19 Aug 31, 2023
b5e0950
fail count (#2496)
mvpatel2000 Aug 31, 2023
8e106a6
Remove PR curve metrics from backward compatibility test and skip tor…
eracah Aug 31, 2023
fc6c995
filter warning (#2500)
mvpatel2000 Sep 1, 2023
ac60704
bump version (#2498)
mvpatel2000 Sep 1, 2023
9274a77
Skip metrics in state dict (#2501)
mvpatel2000 Sep 1, 2023
d7b49c7
Add peak memory stats (#2504)
mvpatel2000 Sep 1, 2023
c24d60d
fix sharded ckpt (#2505)
mvpatel2000 Sep 1, 2023
4e50192
Bump gitpython from 3.1.31 to 3.1.34 (#2509)
dependabot[bot] Sep 5, 2023
90e8bf2
Annotate `torch_prof_remote_file_name` as Optional (#2512)
srstevenson Sep 5, 2023
dac9054
fix: when there is no train_metrics, do not checkpoint (#2502)
furkanbiten Sep 5, 2023
8dfa2db
Remove metric saving (#2514)
mvpatel2000 Sep 7, 2023
c8f3ecd
Fix daily tests by removing gpu marker (#2515)
j316chuck Sep 7, 2023
284c1b7
Refactor mosaic_fsdp.py (#2506)
b-chu Sep 7, 2023
c507e30
fix pr (#2517)
mvpatel2000 Sep 7, 2023
303b7c3
Add custom sharding to ChunkShardingSpec (#2507)
b-chu Sep 8, 2023
3a19321
Update nightly docker image to torch nightly 09-03-23 (#2518)
j316chuck Sep 8, 2023
4ca8f5a
Update pre-commit in setup.py (#2522)
b-chu Sep 8, 2023
c1f87f7
Add FSDP custom wrap with torch 2.1 (#2460)
mvpatel2000 Sep 8, 2023
decf2b2
Fix GCSObjectStore bug where hmac keys auth doesn't work (#2519)
eracah Sep 9, 2023
b521207
prelim commit
bmosaicml Sep 12, 2023
3b09be7
add output logger
bmosaicml Sep 12, 2023
5697e1f
create eval output logger
bmosaicml Sep 12, 2023
2e01b89
change dist reduce fx
bmosaicml Sep 13, 2023
b3b1377
Bump gitpython from 3.1.34 to 3.1.35 (#2525)
dependabot[bot] Sep 12, 2023
1f5012b
Bump pytest from 7.4.0 to 7.4.2 (#2523)
dependabot[bot] Sep 12, 2023
6bdc53e
Upgrade to mlflow version 2.5.0 (#2528)
ngcgarcia Sep 12, 2023
1818b51
disable cifar daily (#2527)
mvpatel2000 Sep 12, 2023
13d411e
mosaicml logger robustness improvements (#2530)
mvpatel2000 Sep 12, 2023
51650ff
Fix metrics keys sort in DecoupledAdamW for OptimizerMonitor FSDP met…
m1kol Sep 12, 2023
c780740
Fix github actions for GCS integration testing (#2532)
mvpatel2000 Sep 13, 2023
17953f4
change dist reduce fx
bmosaicml Sep 13, 2023
cb0ce0e
fix pyright
bmosaicml Sep 13, 2023
f2dd81f
Fix GCS tests (#2535)
mvpatel2000 Sep 13, 2023
8fea658
merge
bmosaicml Sep 14, 2023
75260fc
finish error logging cb
bmosaicml Sep 14, 2023
8bb395f
Merge branch 'dev' into error_logging_callback
bmosaicml Sep 14, 2023
fada3b5
fix
bmosaicml Sep 14, 2023
0540383
Merge branch 'dev' into error_logging_callback
bmosaicml Sep 18, 2023
0e6e6d8
add import to init
bmosaicml Sep 18, 2023
3668c29
Merge branch 'error_logging_callback' of github.com:bmosaicml/compose…
bmosaicml Sep 18, 2023
c653090
add import to init
bmosaicml Sep 18, 2023
b8be3a2
add import to init
bmosaicml Sep 18, 2023
f309785
add file writing
bmosaicml Sep 18, 2023
33b35af
add file writing
bmosaicml Sep 18, 2023
1a0ef89
add file writing
bmosaicml Sep 18, 2023
8aa77f0
add file writing
bmosaicml Sep 18, 2023
1b7e6db
add file writing
bmosaicml Sep 18, 2023
9c75b53
move tensors to cpu
bmosaicml Sep 19, 2023
7a41a01
remove tensors
bmosaicml Sep 19, 2023
e5c8b61
remove tensors
bmosaicml Sep 19, 2023
a33cbd9
remove tensors
bmosaicml Sep 20, 2023
fa88a05
add prompt to qa
bmosaicml Sep 20, 2023
8111682
add prompt to qa
bmosaicml Sep 20, 2023
6e651fd
add prompt to qa
bmosaicml Sep 20, 2023
501bc0c
add prompt to qa
bmosaicml Sep 20, 2023
0116903
add prompt to qa
bmosaicml Sep 20, 2023
5fa5957
add prompt to qa
bmosaicml Sep 20, 2023
5ffb804
add prompt to qa
bmosaicml Sep 20, 2023
605f437
add prompt to qa
bmosaicml Sep 20, 2023
afaa437
add prompt to qa
bmosaicml Sep 20, 2023
b772029
add prompt to qa
bmosaicml Sep 20, 2023
6f8e0d7
add prompt to qa
bmosaicml Sep 20, 2023
92779c4
add prompt to qa
bmosaicml Sep 21, 2023
1ec300e
add prompt to qa
bmosaicml Sep 21, 2023
cf943f4
add prompt to qa
bmosaicml Sep 22, 2023
be859bb
try debugging dist sync issue
bmosaicml Sep 25, 2023
8ab2b04
nit
bmosaicml Sep 25, 2023
a6999aa
debugging
bmosaicml Sep 25, 2023
828ceec
debugging
bmosaicml Sep 25, 2023
b98ffe6
debugging
bmosaicml Sep 25, 2023
0c61063
debugging
jcd2020 Sep 26, 2023
72a4f2b
debugging
jcd2020 Sep 26, 2023
8cf8829
debugging
jcd2020 Sep 26, 2023
484510c
debugging
jcd2020 Sep 26, 2023
07cbebf
debugging
jcd2020 Sep 26, 2023
e6af285
debugging
jcd2020 Sep 26, 2023
6f55ff5
debugging
jcd2020 Sep 26, 2023
e0d80ab
debugging
jcd2020 Sep 26, 2023
177b935
debugging
jcd2020 Sep 26, 2023
dcfa6de
debugging
jcd2020 Sep 26, 2023
cce1fb0
debugging
jcd2020 Sep 26, 2023
10ab1ca
debugging
jcd2020 Sep 26, 2023
3b3fd26
fix syncing of non tensor state
jcd2020 Sep 26, 2023
ab6d797
Merge branch 'dev' into error_logging_callback
bmosaicml Nov 8, 2023
6266eeb
added gpu test
bmosaicml Nov 8, 2023
cd1fc58
merge
bmosaicml Nov 8, 2023
76882cb
fix error
bmosaicml Nov 9, 2023
2855e1f
finish testing callback
bmosaicml Nov 15, 2023
29a5803
fix all errors
bmosaicml Nov 16, 2023
f56c9de
Merge branch 'dev' into error_logging_callback
bmosaicml Nov 16, 2023
57133a4
test commit
tbarton16 Nov 21, 2023
7196028
roll back test commit
tbarton16 Nov 21, 2023
4410203
Merge branch 'dev' into error_logging_callback
bmosaicml Nov 27, 2023
21e322e
remove ranks
bmosaicml Nov 27, 2023
e4eb7ee
Merge branch 'error_logging_callback' of github.com:bmosaicml/compose…
bmosaicml Nov 27, 2023
61447a2
Merge branch 'error_logging_callback' of github.com:bmosaicml/compose…
bmosaicml Nov 27, 2023
d69bdba
re-tesT
bmosaicml Nov 27, 2023
d999b68
Merge branch 'error_logging_callback' of github.com:bmosaicml/compose…
bmosaicml Nov 27, 2023
c030717
Merge branch 'dev' into error_logging_callback
bmosaicml Dec 5, 2023
ca4d3c4
add custome gen kwargs and stopping on eos token
bmosaicml Dec 14, 2023
1e39623
modify test
bmosaicml Dec 14, 2023
09af753
modify test
bmosaicml Dec 14, 2023
47f3c91
Merge branch 'dev' into pass_on_custom_generation_kwargs
bmosaicml Dec 15, 2023
9f9a6bc
Merge branch 'pass_on_custom_generation_kwargs' into error_logging_ca…
bmosaicml Dec 15, 2023
a3501e9
finish
bmosaicml Dec 18, 2023
fadce0e
finish
bmosaicml Dec 18, 2023
92157da
finish
bmosaicml Dec 18, 2023
d137bbc
finish
bmosaicml Dec 18, 2023
b25da3f
Merge branch 'dev' into pass_on_custom_generation_kwargs
bmosaicml Dec 18, 2023
909ed63
finish pr
bmosaicml Dec 20, 2023
e263b5b
Merge branch 'pass_on_custom_generation_kwargs' of github.com:bmosaic…
bmosaicml Dec 20, 2023
32d6668
Merge branch 'dev' into pass_on_custom_generation_kwargs
bmosaicml Dec 20, 2023
4ff16b4
implement early stop
bmosaicml Dec 20, 2023
7ee0a72
Merge branch 'pass_on_custom_generation_kwargs' into add_custom_stopp…
bmosaicml Dec 20, 2023
bcf002e
Merge branch 'add_custom_stopping_criteria' into error_logging_callback
bmosaicml Dec 20, 2023
83a60b7
add tesT
bmosaicml Dec 20, 2023
a031772
Merge branch 'pass_on_custom_generation_kwargs' into add_custom_stopp…
bmosaicml Dec 20, 2023
e5943d6
merge update
bmosaicml Dec 20, 2023
67b4685
Merge branch 'add_custom_stopping_criteria' of github.com:bmosaicml/c…
bmosaicml Dec 20, 2023
be32781
Merge branch 'add_custom_stopping_criteria' into error_logging_callback
bmosaicml Dec 20, 2023
e512a21
merge
bmosaicml Dec 20, 2023
a1af91a
fix
bmosaicml Dec 22, 2023
5f23b3e
finish
bmosaicml Dec 23, 2023
42fb431
finish
bmosaicml Dec 23, 2023
aa05076
fix bug
bmosaicml Dec 23, 2023
076731d
Merge branch 'add_custom_stopping_criteria' into error_logging_callback
bmosaicml Dec 23, 2023
89669c6
finish
bmosaicml Dec 23, 2023
95a7d28
Merge branch 'error_logging_callback' of github.com:bmosaicml/compose…
bmosaicml Dec 23, 2023
dce4ef0
bug fix
bmosaicml Dec 23, 2023
cb3c69d
add keys
bmosaicml Dec 26, 2023
cea85d4
Merge branch 'add_custom_stopping_criteria' into error_logging_callback
bmosaicml Dec 26, 2023
7371e66
add correcT
bmosaicml Dec 26, 2023
c7f5198
modify sync
bmosaicml Dec 26, 2023
786c64c
diff split
bmosaicml Dec 26, 2023
559beee
Merge branch 'add_custom_stopping_criteria' into error_logging_callback
bmosaicml Dec 26, 2023
7f20954
fix typo
bmosaicml Dec 26, 2023
bd10cdd
Merge branch 'add_custom_stopping_criteria' into error_logging_callback
bmosaicml Dec 26, 2023
adf5bab
edit condition
bmosaicml Dec 26, 2023
4afd292
Merge branch 'dev' into error_logging_callback
bmosaicml Jan 18, 2024
b674b85
Merge branch 'dev' into error_logging_callback
bmosaicml Jan 22, 2024
059d071
fix
bmosaicml Jan 22, 2024
a674dfc
fix
bmosaicml Jan 23, 2024
e986ca3
fix
bmosaicml Jan 23, 2024
ef56d03
fix
bmosaicml Jan 23, 2024
936ebfc
fix
bmosaicml Jan 23, 2024
2 changes: 2 additions & 0 deletions composer/callbacks/__init__.py
@@ -9,6 +9,7 @@
from composer.callbacks.activation_monitor import ActivationMonitor
from composer.callbacks.checkpoint_saver import CheckpointSaver
from composer.callbacks.early_stopper import EarlyStopper
from composer.callbacks.eval_output_logging_callback import EvalOutputLogging
from composer.callbacks.export_for_inference import ExportForInferenceCallback
from composer.callbacks.free_outputs import FreeOutputs
from composer.callbacks.generate import Generate
@@ -34,6 +35,7 @@
    'CheckpointSaver',
    'MLPerfCallback',
    'EarlyStopper',
    'EvalOutputLogging',
    'ExportForInferenceCallback',
    'ThresholdStopper',
    'ImageVisualizer',
148 changes: 148 additions & 0 deletions composer/callbacks/eval_output_logging_callback.py
@@ -0,0 +1,148 @@
# Copyright 2022 MosaicML Composer authors
# SPDX-License-Identifier: Apache-2.0

"""Log model outputs and expected outputs during ICL evaluation."""

import hashlib
import os
import random
import shutil
import time
from typing import Callable, Optional

from torch.utils.data import DataLoader

from composer.core import Callback, State
from composer.datasets.in_context_learning_evaluation import (InContextLearningCodeEvalDataset,
                                                              InContextLearningLMTaskDataset,
                                                              InContextLearningMultipleChoiceTaskDataset,
                                                              InContextLearningQATaskDataset,
                                                              InContextLearningSchemaTaskDataset)
from composer.loggers import Logger
from composer.loggers.console_logger import ConsoleLogger
from composer.utils import MissingConditionalImportError, dist, maybe_create_object_store_from_uri, parse_uri

ICLDatasetTypes = (InContextLearningLMTaskDataset, InContextLearningQATaskDataset,
                   InContextLearningMultipleChoiceTaskDataset, InContextLearningSchemaTaskDataset,
                   InContextLearningCodeEvalDataset)


def _write(destination_path, src_file):
    obj_store = maybe_create_object_store_from_uri(destination_path)
    _, _, save_path = parse_uri(destination_path)
    if obj_store is not None:
        obj_store.upload_object(object_name=save_path, filename=src_file)
    else:
        with dist.local_rank_zero_download_and_wait(destination_path):
            if dist.get_local_rank() == 0:
                shutil.copy(src_file, destination_path)


class EvalOutputLogging(Callback):
    """Logs eval outputs for each sample of each ICL evaluation dataset.

    ICL metrics are required to support caching the model's responses, including information on
    whether the model was correct. Metrics are also responsible for providing a method that renders
    the cached responses as strings. This callback accesses each eval benchmark during ``eval_end``,
    retrieves the cached results, and renders and logs them in tabular format.

    If ``subset_sample > 0``, only ``subset_sample`` of the outputs will be logged.

    ``output_directory`` indicates where to write the TSV results; it can be either a local
    directory or a cloud storage directory.
    """

    def __init__(self, subset_sample: int = -1, output_directory: Optional[str] = None):
        self.subset_sample = subset_sample
        self.table = {}
        self.output_directory = output_directory if output_directory else os.getcwd()
        self.hash = hashlib.sha256()
        self.destination_file = None

    def _write_tables_to_output_dir(self, state: State):
        try:
            import pandas as pd
        except ImportError as e:
            raise MissingConditionalImportError(extra_deps_group='pandas',
                                                conda_package='pandas',
                                                conda_channel='conda-forge') from e
        # write tmp files
        self.hash.update((str(time.time()) + str(random.randint(0, 1_000_000))).encode('utf-8'))
        tmp_dir = os.getcwd() + '/' + self.hash.hexdigest()

        if not os.path.exists(tmp_dir):
            with dist.local_rank_zero_download_and_wait(tmp_dir):
                if dist.get_local_rank() == 0:
                    os.mkdir(tmp_dir)

        full_df = pd.DataFrame()
        file_name = f'eval-outputs-ba{state.timestamp.batch.value}.tsv'

        for benchmark in self.table:
            cols, rows = self.table[benchmark]
            rows = [[e.encode('unicode_escape') if isinstance(e, str) else e for e in row] for row in rows]
            df = pd.DataFrame.from_records(data=rows, columns=cols)
            df['benchmark'] = benchmark
            full_df = pd.concat([full_df, df], ignore_index=True)

        with dist.local_rank_zero_download_and_wait(f'{tmp_dir}/{file_name}'):
            if dist.get_local_rank() == 0:
                with open(f'{tmp_dir}/{file_name}', 'wb') as f:
                    full_df.to_csv(f, sep='\t', index=False)

        # copy/upload tmp files
        _write(destination_path=f'{self.output_directory}/{file_name}', src_file=f'{tmp_dir}/{file_name}')
        os.remove(f'{tmp_dir}/{file_name}')
        self.destination_file = f'{self.output_directory}/{file_name}'

        # delete tmp files
        os.rmdir(tmp_dir)

    def _prep_response_cache(self, state, cache):
        benchmark = state.dataloader_label
        for metric in state.eval_metrics[benchmark].values():
            if hasattr(metric, 'reset_response_cache'):
                metric.reset_response_cache(cache)

    def eval_start(self, state: State, logger: Logger) -> None:
        # eval_start runs before each benchmark's evaluator (either in training or eval)
        self._prep_response_cache(state, True)

    def eval_after_all(self, state: State, logger: Logger) -> None:
        # eval_after_all runs after all evaluators have completed during eval within training
        self._write_tables_to_output_dir(state)
        self.table = {}

    def eval_standalone_end(self, state: State, logger: Logger) -> None:
        # eval_standalone_end runs after all evaluators have completed during a direct call to trainer.eval()
        self._write_tables_to_output_dir(state)
        self.table = {}

    def eval_end(self, state: State, logger: Logger) -> None:
        # eval_end runs after each benchmark's evaluator;
        # during each eval, only a single dataloader/benchmark will be active
        assert state.dataloader is not None
        assert isinstance(state.dataloader, DataLoader)
        if hasattr(state.dataloader, 'dataset') and isinstance(state.dataloader.dataset, ICLDatasetTypes):
            if hasattr(state.dataloader.dataset, 'tokenizer'):
                tokenizer = state.dataloader.dataset.tokenizer
                benchmark = state.dataloader_label
                assert benchmark is not None
                assert isinstance(benchmark, str)
                for metric_name, metric in state.eval_metrics[benchmark].items():
                    if hasattr(metric, 'format_response_cache'):
                        assert isinstance(metric.format_response_cache, Callable)
                        format_response_cache: Callable = metric.format_response_cache
                        columns, rows = format_response_cache(tokenizer)

                        if columns is not None and rows is not None:
                            if self.subset_sample > 0:
                                rows = random.sample(rows, min(len(rows), self.subset_sample))
                            for destination in logger.destinations:
                                if not isinstance(destination, ConsoleLogger):
                                    # don't log to console because it would pollute the console too much
                                    destination.log_table(columns, rows, f'icl_outputs/{benchmark}/{metric_name}')

                            self.table[f'{benchmark}_{metric_name}'] = (columns, rows)
        self._prep_response_cache(state, False)
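For context, the callback only probes metrics with `hasattr` for two method names, `reset_response_cache` and `format_response_cache`. A minimal sketch of a metric satisfying that duck-typed interface — the `ToyICLMetric` class and its fields are hypothetical illustrations, not part of Composer:

```python
class ToyICLMetric:
    """Hypothetical metric that caches (prompt, response, correct) rows while caching is enabled."""

    def __init__(self):
        self._cache_enabled = False
        self._rows = []

    def reset_response_cache(self, enable: bool) -> None:
        # EvalOutputLogging calls this with True at eval_start and False at eval_end.
        self._cache_enabled = enable
        self._rows = []

    def update(self, prompt: str, response: str, correct: bool) -> None:
        if self._cache_enabled:
            self._rows.append([prompt, response, correct])

    def format_response_cache(self, tokenizer=None):
        # Render the cached responses as (columns, rows), the shape log_table consumes.
        return ['prompt', 'response', 'correct'], list(self._rows)


metric = ToyICLMetric()
metric.reset_response_cache(True)
metric.update('2+2=', '4', True)
columns, rows = metric.format_response_cache()
print(columns)  # ['prompt', 'response', 'correct']
print(rows)     # [['2+2=', '4', True]]
```

The `(columns, rows)` pairs returned here are what `eval_end` stores in `self.table` and later flattens into the TSV.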
4 changes: 3 additions & 1 deletion composer/datasets/in_context_learning_evaluation.py
@@ -164,6 +164,7 @@ def __init__(
            if dist.get_local_rank() == 0:
                get_file(dataset_uri, destination_path, overwrite=True)
        dataset = load_dataset('json', data_files=destination_path, split='train', streaming=False)

        self.samples = self._read_dataset(dataset)
        self.samples = strip_data(self.samples)
        self.tokenizer = tokenizer
@@ -225,6 +226,7 @@ def _prep_examples(self,
            example_delimiter (str): The delimiter used to separate each individual context/continuation pair
            continuation_delimiter (str): The delimiter used to separate each context from its continuation
            question_prelimiter (str): The text to prepend to each question
            cot_delimiter (str): The delimiter used to separate the chain-of-thought (if present) from the final model response.
            fewshot_rng (random.Random): Random number generator to use for fewshot sampling

@@ -241,7 +243,6 @@
            prompt_and_fewshot = self._format_prompt_and_fewshot(num_fewshot, prompt_string, example_delimiter,
                                                                 continuation_delimiter, question_prelimiter,
                                                                 cot_delimiter, fewshot_rng, sample_idx)

            ctxt = self.samples[sample_idx]['context']
            ctxt = f'{question_prelimiter}{ctxt}'
            if len(prompt_and_fewshot) > 0:
@@ -260,6 +261,7 @@
            encoded_example['preamble']['input_ids'] = encoded_example['preamble']['input_ids'][:-1]

            encoded_example['context'] = self.tokenizer(ctxt, add_special_tokens=False)

            encoded_example['aliases'] = list(self.samples[sample_idx]['aliases'])
            encoded_example['cot_delimiter'] = cot_delimiter
            examples.append(encoded_example)
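The `cot_delimiter` documented above separates chain-of-thought reasoning from the final answer in a generated response. A rough illustration of the idea — the delimiter string and helper name below are made up for the example, and the dataset's actual splitting logic is not shown in this diff:

```python
def extract_final_answer(response: str, cot_delimiter: str) -> str:
    """Keep only the text after the last delimiter; if absent, use the whole response."""
    if cot_delimiter and cot_delimiter in response:
        return response.split(cot_delimiter)[-1].strip()
    return response.strip()


resp = 'The ship had 3 crates of 4 apples, so 3 * 4 = 12. #### 12'
print(extract_final_answer(resp, ' #### '))  # 12
print(extract_final_answer('just 12', ' #### '))  # just 12
```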
4 changes: 3 additions & 1 deletion composer/loggers/in_memory_logger.py
@@ -79,7 +79,9 @@ def log_table(self, columns: List[str], rows: List[List[Any]], name: str = 'Tabl
            raise MissingConditionalImportError(extra_deps_group='pandas',
                                                conda_package='pandas',
                                                conda_channel='conda-forge') from e
        table = pd.DataFrame.from_records(data=rows, columns=columns).to_json(orient='split', index=False)
        table = pd.DataFrame.from_records(data=rows, columns=columns).to_json(orient='split',
                                                                              index=False,
                                                                              force_ascii=False)
        self.tables[name] = table

    def log_metrics(self, metrics: Dict[str, Any], step: Optional[int] = None) -> None:
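The `force_ascii=False` addition matters when logged tables contain non-ASCII model outputs: pandas' `to_json` escapes such characters by default, much like `json.dumps` with its default `ensure_ascii=True`. A stdlib analogue of the behavior change:

```python
import json

row = {'response': 'café ☕'}
escaped = json.dumps(row)                       # default ensure_ascii=True
readable = json.dumps(row, ensure_ascii=False)  # analogous to force_ascii=False
print(escaped)   # {"response": "caf\u00e9 \u2615"}
print(readable)  # {"response": "café ☕"}
```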