GH-41719: [C++][Parquet] Cannot read encrypted parquet datasets via _metadata file #41821

Open · wants to merge 1 commit into main
Conversation

@rok (Member) commented May 24, 2024

Rationale for this change

Metadata written into the _metadata file appears not to be encrypted.

What changes are included in this PR?

This adds a code path to encrypt the _metadata file, along with a test.

Are these changes tested?

Yes

Are there any user-facing changes?

This adds a user-facing encryption_properties parameter to pyarrow.parquet.write_metadata.
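
A minimal sketch of the new parameter (hedged: schema, metadata_collector and encryption_properties are placeholders for objects the caller already has; encryption_properties would come from a CryptoFactory, as in the reproduction script later in this thread):

import pyarrow.parquet as pq

# encryption_properties is obtained via, e.g.:
#   crypto_factory.file_encryption_properties(kms_connection_config, encryption_config)
pq.write_metadata(
    schema,                     # common schema of the dataset files
    "dataset_dir/_metadata",    # summary file location
    metadata_collector,         # FileMetaData objects collected during writes
    encryption_properties=encryption_properties,  # new parameter in this PR
)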

⚠️ GitHub issue #41719 has been automatically assigned in GitHub to PR creator.

@rok force-pushed the gh-41719_fix__metadata_file branch from 7be8f5e to 26cfd63 on June 2, 2024 14:12
@rok marked this pull request as ready for review on June 2, 2024 14:13
@rok requested a review from wgtmac as a code owner on June 2, 2024 14:13
@rok requested review from pitrou and removed the request for wgtmac on June 2, 2024 14:14
@rok (Member, Author) commented Jun 2, 2024

@AudriusButkevicius could you check if the read metadata contains the expected properties?

@AudriusButkevicius commented Jun 2, 2024

Hey, thanks for working on this. It looks great, but something seems to be missing.

You can read the metadata file, but no files are actually read when reading the dataset via the metadata file, so the table ends up empty (though with the right schema).

(arrow) root@base ~/code/arrow # python testx.py
/tmp/tmp8qr3xv_t
Writing
pyarrow.Table
col1: int64
col2: int64
year: int64
----
col1: [[1,2,3]]
col2: [[1,2,3]]
year: [[2020,2020,2021]]
write done


Reading
pyarrow.Table
col1: int64
col2: int64
year: int16
----
col1: []
col2: []
year: []

Code that reproduces the issue:
import os
import tempfile

import pyarrow.parquet.encryption as pe
import pyarrow.parquet as pq
import pyarrow.dataset as ds
import pyarrow as pa
import base64
import polars as pl


# Toy KMS client for testing: "wraps" keys with plain base64 (no real KMS).
class KmsClient(pe.KmsClient):
    def unwrap_key(self, wrapped_key, master_key_identifier):
        return base64.b64decode(wrapped_key)

    def wrap_key(self, key_bytes, master_key_identifier):
        return base64.b64encode(key_bytes)


def write(location):
    cf = pe.CryptoFactory(lambda *a, **k: KmsClient())
    df = pl.DataFrame({
        "col1": [1, 2, 3],
        "col2": [1, 2, 3],
        "year": [2020, 2020, 2021]
    })
    ecfg = pe.EncryptionConfiguration(
        footer_key="TEST",
        column_keys={
            "TEST": ["col2"]
        },
        double_wrapping=False,
        plaintext_footer=False,
    )
    table = df.to_arrow()
    print("Writing")
    print(table)

    parquet_encryption_cfg = ds.ParquetEncryptionConfig(
        cf, pe.KmsConnectionConfig(), ecfg
    )

    metadata_collector = []

    pq.write_to_dataset(
        table,
        location,
        partitioning=ds.partitioning(
            schema=pa.schema([
                pa.field("year", pa.int16())
            ]),
            flavor="hive"
        ),
        encryption_config=parquet_encryption_cfg,
        metadata_collector=metadata_collector
    )

    encryption_properties = cf.file_encryption_properties(pe.KmsConnectionConfig(), ecfg)

    pq.write_metadata(
        pa.schema(
            field
            for field in table.schema
            if field.name != "year"
        ),
        os.path.join(location, "_metadata"),
        metadata_collector,
        encryption_properties=encryption_properties,
    )
    print("write done")


def read(location):
    decryption_config = pe.DecryptionConfiguration(cache_lifetime=300)
    kms_connection_config = pe.KmsConnectionConfig()
    cf = pe.CryptoFactory(lambda *a, **k: KmsClient())
    parquet_decryption_cfg = ds.ParquetDecryptionConfig(
        cf, kms_connection_config, decryption_config
    )

    decryption_properties = cf.file_decryption_properties(
        kms_connection_config, decryption_config)
    pq_scan_opts = ds.ParquetFragmentScanOptions(
        decryption_config=parquet_decryption_cfg,
        # If using build from master
        decryption_properties=decryption_properties
    )
    pformat = ds.ParquetFileFormat(default_fragment_scan_options=pq_scan_opts)

    dataset = ds.parquet_dataset(
        os.path.join(location, "_metadata"),
        format=pformat,
        partitioning=ds.partitioning(
            schema=pa.schema([
                pa.field("year", pa.int16())
            ]),
            flavor="hive"
        )
    )
    print("Reading")
    print(dataset.to_table())

if __name__ == '__main__':
    location = tempfile.mkdtemp()  # mkdtemp already creates the directory
    print(location)
    write(location)
    print("\n")
    read(location)

@AudriusButkevicius

It seems that dataset.get_fragments() doesn't return anything.

@rok (Member, Author) commented Jun 2, 2024

It seems that dataset.get_fragments() doesn't return anything.

I think the C++ logic as-is doesn't store row groups; let me take a look and get back.

@rok added this to the 17.0.0 milestone on Jun 2, 2024
@AudriusButkevicius commented Jun 2, 2024

I think the whole premise of _metadata files (not _common_metadata) is to store row group details as well as paths, so a read via the _metadata file knows exactly which files and row groups to read without having to open every file in the dataset. At least this is what happens when encryption is disabled (sketch below).
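
For illustration, a hedged sketch of the two read paths (plaintext case; location and the hive partitioning are as in the reproduction script above):

import os
import pyarrow.dataset as ds

# Without _metadata: pyarrow must list the directory and open every data
# file's footer to discover row groups and statistics.
dataset = ds.dataset(location, format="parquet", partitioning="hive")

# With _metadata: a single footer read yields all file paths, row groups
# and statistics for the whole dataset.
dataset = ds.parquet_dataset(os.path.join(location, "_metadata"), partitioning="hive")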

@wgtmac (Member) commented Jun 3, 2024

Although I understand the intention of this issue and the corresponding fix, I don't think the design of Parquet encryption ever included the _metadata summary file, because it may point to several different encrypted parquet files. It would be great if @ggershinsky could advise whether there is any defect in this use case.

@wgtmac (Member) commented Jun 3, 2024

BTW, the _metadata summary file is something we strive to deprecate in the new parquet v3 design. If you have any concrete use case, please let us know so we can see whether we can improve. @AudriusButkevicius

@rok force-pushed the gh-41719_fix__metadata_file branch from 26cfd63 to e0c9418 on June 3, 2024 03:19
@rok (Member, Author) commented Jun 3, 2024

I don't think the design of Parquet encryption ever included the _metadata summary file, because it may point to several different encrypted parquet files.

Why would this be an issue? Because these files might be encrypted with different keys?

@wgtmac (Member) commented Jun 3, 2024

Yes, these files may have different master keys, which cannot all be referenced by a single footer, IMHO.

@rok (Member, Author) commented Jun 3, 2024

But the files can share the same key, and decryption would error out if they don't? That sounds OK to me.

@rok (Member, Author) commented Jun 3, 2024

A Python error is thrown here, but I'm not sure why.

@AudriusButkevicius commented Jun 3, 2024

My intended use of this is to reduce strain on the filesystem when reading large (many-file) datasets from a network-attached filesystem, by reading the metadata file instead of many separate files.

I also have a hard requirement for encryption, sadly, as the data is sensitive.

It would be amazing if this worked with encrypted datasets, assuming the key is the same.

I would also be OK with storing the metadata in plaintext, performing fragment filtering based on row-group stats, and then re-reading and decrypting the footers of the chosen files. Obviously this is fine for my use case, but in general it might not be.

@AudriusButkevicius

I didn't get any error when reading; it seems it just returns no data.

@wgtmac (Member) commented Jun 3, 2024

For the record: the community is inclined to deprecate ColumnChunk.file_path: https://lists.apache.org/thread/qprmj710yq92ybyt89m5xgtqyz3o3st2 and https://github.com/apache/parquet-format/pull/242/files#r1603234502

I'm not sure if we want to support and maintain this. cc @emkornfield @pitrou @mapleFU

@AudriusButkevicius

Left my 2c there. Explained why I would be sad if that happened, and would probably have to re-implement the same feature.

@rok (Member, Author) commented Jun 3, 2024

@wgtmac I'm not sure I follow. We already have WriteMetaDataFile to produce the _metadata file, and if we just add WriteEncryptedMetadataFile to produce encrypted _metadata files we're not really adding extra complexity (outside of WriteEncryptedMetadataFile), or am I missing something? :)

@ggershinsky (Contributor)

Although I understand the intention of this issue and the corresponding fix, I don't think the design of Parquet encryption ever included the _metadata summary file, because it may point to several different encrypted parquet files. It would be great if @ggershinsky could advise whether there is any defect in this use case.

Yep, we haven't worked on supporting this (basically, there was no requirement; it seemed to be heading towards deprecation).

In general, using different encryption keys for different data files is considered good security practice (mainly because there is a limit on the number of crypto operations with one key; also, the key-leak scope is smaller), which is why we generate a fresh key for each parquet file in most of the APIs (Parquet, Arrow, Spark, Iceberg, etc). However, there are obviously some low-level parquet APIs that allow passing the same key to many files; if used carefully (making sure, somehow, not to exceed the limit), this might be OK in some cases. The limit is high (~1 billion pages, something like 10 TB-1 PB of data), but if it is exceeded, the cipher breaks and the data can be decrypted.

Another option could be to create a separate key for the _metadata summary file and manage it separately from the data file keys.
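
Back-of-the-envelope arithmetic for those numbers (my arithmetic, not from the comment; it assumes one AES-GCM operation per page):

pages = 10**9                      # ~1 billion crypto operations per key
print(pages * 10 * 1024 / 1e12)    # ~10 KB pages -> roughly 10 TB per key
print(pages * 1024 * 1024 / 1e15)  # ~1 MB pages  -> roughly 1 PB per key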

@pitrou (Member) commented Jun 3, 2024

mainly because there is a limit on the number of crypto operations with one key

What is the theoretical limit, assuming a 256-bit AES key?

Also, if column key encryption is used, wouldn't the limit basically become irrelevant?

@rok force-pushed the gh-41719_fix__metadata_file branch from cd5bf41 to a4d58e0 on August 6, 2024 21:31
Comment on lines 576 to 600
if (file_encryption_properties->encrypted_footer()) {
  // Encrypted-footer layout: PARE magic, crypto metadata, encrypted footer,
  // combined length, PARE magic.
  PARQUET_THROW_NOT_OK(sink->Write(kParquetEMagic, 4));

  PARQUET_ASSIGN_OR_THROW(int64_t position, sink->Tell());
  auto metadata_start = static_cast<uint64_t>(position);

  auto writer_props = parquet::WriterProperties::Builder()
                          .encryption(file_encryption_properties)
                          ->build();
  auto builder = FileMetaDataBuilder::Make(metadata.schema(), writer_props);

  auto footer_metadata = builder->Finish(metadata.key_value_metadata());
  auto crypto_metadata = builder->GetCryptoMetaData();
  WriteFileCryptoMetaData(*crypto_metadata, sink.get());

  auto footer_encryptor = file_encryptor->GetFooterEncryptor();
  WriteEncryptedFileMetadata(metadata, sink.get(), footer_encryptor, true);
  PARQUET_ASSIGN_OR_THROW(position, sink->Tell());
  auto footer_and_crypto_len = static_cast<uint32_t>(position - metadata_start);
  PARQUET_THROW_NOT_OK(
      sink->Write(reinterpret_cast<uint8_t*>(&footer_and_crypto_len), 4));
  PARQUET_THROW_NOT_OK(sink->Write(kParquetEMagic, 4));
} else {
@rok (Member, Author)

I'm doing metadata encryption similarly to the approach here:

if (file_encryption_properties->encrypted_footer()) {
  // encrypted footer
  file_metadata_ = metadata_->Finish(key_value_metadata_);

  PARQUET_ASSIGN_OR_THROW(int64_t position, sink_->Tell());
  uint64_t metadata_start = static_cast<uint64_t>(position);
  auto crypto_metadata = metadata_->GetCryptoMetaData();
  WriteFileCryptoMetaData(*crypto_metadata, sink_.get());

  auto footer_encryptor = file_encryptor_->GetFooterEncryptor();
  WriteEncryptedFileMetadata(*file_metadata_, sink_.get(), footer_encryptor, true);
  PARQUET_ASSIGN_OR_THROW(position, sink_->Tell());
  uint32_t footer_and_crypto_len = static_cast<uint32_t>(position - metadata_start);
  PARQUET_THROW_NOT_OK(
      sink_->Write(reinterpret_cast<uint8_t*>(&footer_and_crypto_len), 4));
  PARQUET_THROW_NOT_OK(sink_->Write(kParquetEMagic, 4));
} else {  // Encrypted file with plaintext footer

However, it seems decryption fails (see below) when using the RowGroup metadata (after deserialization and decryption):

ARROW_ASSIGN_OR_RAISE(auto path,
                      FileFromRowGroup(filesystem.get(), base_path, *row_group,
                                       options.validate_column_chunk_paths));

This makes me think I'm either not serializing correctly or there's an issue with the encryption/decryption properties I'm supplying.

@wgtmac @pitrou does anything obviously wrong stand out here?

@wgtmac (Member)

What error did you get? I suspect it has some issues with the row group ordinal. Please search for "row group ordinal" in https://github.com/apache/parquet-format/blob/master/Encryption.md

@rok (Member, Author) commented Aug 7, 2024

The actual error is thrown here:

auto path = row_group.ColumnChunk(0)->file_path();

In the debugger I see row_group.ColumnChunk(0) as null, so row_group.ColumnChunk(0)->file_path() fails.

The error message I'm getting is:

unknown file: Failure
C++ exception with description "Failed decryption finalization" thrown in the test body.

@rok (Member, Author)

For the test, the FileMetadata object is built by reading several files, extracting their metadata, and merging them (metadata->AppendRowGroups(*metadata_vector[1])).

@rok (Member, Author)

We could solve this by storing file AAD prefixes in key_value_metadata so they can be used at read time to decode row group metadata. This seems to work (see cpp/src/parquet/metadata.cc), but reading the actual data with scanner->ToTable() is somewhat less straightforward, though it seems doable. Before proceeding, I'd like to ask here:

  • does this approach make sense?
  • what would a good location be for injecting AAD prefixes when reading column data?

cc @wgtmac @ggershinsky

@ggershinsky (Contributor)

The format has a field for AAD prefixes: https://github.com/apache/parquet-format/blob/master/src/main/thrift/parquet.thrift#L1126

It is file-wide, of course. So if the parquet files had different AAD prefixes, a new one has to be used for the merged metadata.

Per my comment above, I think the only way to make this work is to fully decrypt the footers of the parquet files (using their encryption params, including keys and aad_prefixes), merge them, and then encrypt the result with a new key/aad_prefix. (A sketch of this flow follows.)
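
A hedged sketch of that decrypt-merge-re-encrypt flow using public pyarrow APIs (crypto_factory, kms_connection_config, decryption_config, data_file_paths, schema, and new_encryption_properties are illustrative names; write_metadata's encryption_properties parameter is the one added by this PR; note this shows the flow only, as footers read from data files would still need their file_path entries set):

import pyarrow.parquet as pq

decryption_properties = crypto_factory.file_decryption_properties(
    kms_connection_config, decryption_config)

# Fully decrypt each file's footer, then merge the row groups.
merged = None
for path in data_file_paths:
    md = pq.read_metadata(path, decryption_properties=decryption_properties)
    if merged is None:
        merged = md
    else:
        merged.append_row_groups(md)

# Re-encrypt the coalesced footer with a fresh key (and AAD prefix).
pq.write_metadata(schema, "dataset_dir/_metadata", [merged],
                  encryption_properties=new_encryption_properties)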

@rok force-pushed the gh-41719_fix__metadata_file branch from a4d58e0 to fa448cf on August 16, 2024 13:58
@rok force-pushed the gh-41719_fix__metadata_file branch from fa448cf to 4348123 on August 16, 2024 19:04
@rok force-pushed the gh-41719_fix__metadata_file branch from 4348123 to f70a009 on August 23, 2024 20:00
@rok force-pushed the gh-41719_fix__metadata_file branch from d784b1e to d764b57 on September 26, 2024 22:20
@rok force-pushed the gh-41719_fix__metadata_file branch 5 times, most recently from 7c4b5bb to ec833ef on September 30, 2024 13:59
@rok force-pushed the gh-41719_fix__metadata_file branch from ec833ef to 35b22d6 on October 8, 2024 19:27
@rok force-pushed the gh-41719_fix__metadata_file branch 3 times, most recently from ec79220 to 366ffc9 on October 8, 2024 21:10
@rok force-pushed the gh-41719_fix__metadata_file branch from 366ffc9 to c015985 on October 8, 2024 21:55
Comment on lines +1109 to +1124
for (size_t i = 0; i < metadata_list.size(); i++) {
  const auto& file_metadata = metadata_list[i];
  keys.push_back("row_group_aad_" + std::to_string(i));
  values.push_back(file_metadata->file_aad());
  if (i > 0) {
    metadata_list[0]->AppendRowGroups(*file_metadata);
  }
}

// Create a new FileMetadata object with the created AADs as key_value_metadata.
auto fmd_builder =
    parquet::FileMetaDataBuilder::Make(metadata_list[0]->schema(), writer_props);
const std::shared_ptr<const KeyValueMetadata> file_aad_metadata =
    ::arrow::key_value_metadata(keys, values);
auto metadata = fmd_builder->Finish(file_aad_metadata);
metadata->AppendRowGroups(*metadata_list[0]);
@rok (Member, Author)

@ggershinsky As proposed, I now decrypt all footers and then coalesce them into a single footer. As decrypting the data files at read time requires their AADs, I also store those in key_value_metadata under row_group_aad_{i} keys. Does this seem like a reasonable design?

@ggershinsky (Contributor)

@rok sorry for the delay, I've been away for a while.

store those in key_value_metadata under row_group_aad_{i} keys.

Do we need to store the aad_prefixes? Once a footer of a parquet file is decrypted, the file key and aad_prefix can be dropped. The aad_prefix is a user-provided unique ID of a file, so we can(*) generate a new one for the new _metadata file that keeps the coalesced footer. It would also be good to generate a new key. Then the coalesced footer can be encrypted in the _metadata file.

(*) It's possible to write/encrypt the _metadata file without a new aad_prefix, if the user app level doesn't check the file ID. You can simply pass a null pointer.

Comment on lines +897 to +902
if (key_value_metadata_ && key_value_metadata_->Contains("row_group_aad_0")) {
  PARQUET_ASSIGN_OR_THROW(
      auto aad,
      key_value_metadata_->Get("row_group_aad_" + std::to_string(row_groups[0])));
  out->set_file_decryptor_aad(aad);
}
@rok (Member, Author)

@pitrou regarding your suggestion to avoid storing AADs as key_value_metadata entries and instead read them from the files at dataset scan time: do you have a suggestion on how/where to detect that we're no longer reading the _metadata file but rather the data files? If we can do that, we can change the decryptor's AAD there and read the desired data file.
