Retrieve image text pairs #14
Open
sahel-sh wants to merge 31 commits into TIGER-AI-Lab:main from sahel-sh:retrieve_image_text_pairs
Commits (31, all by sahel-sh)
bdc8649  Add a raw retrieval option to store queries and their retrieved candi…
1104db5  Add interactive retriever
1017242  Merge branch 'main' into raw_retrieval
f6d7659  add retrieval of image-text pairs to retrieval config yaml
cfeaaf5  left a todo for retrieving complementary candidates
b4bde72  Merge branch 'raw_retrieval' into interactive_retrieval
1a3b79a  retrieve complement candidates
5035a49  Merge branch 'main' into raw_retrieval
9b2e01b  reformated with 120 chars
d8f81bf  reformatted with 120
7c568c6  reformatted with 120
e1c4915  fix retrieved candidates path
de8a73c  Merge branch 'raw_retrieval' into interactive_retrieval
70a0145  Merge branch 'interactive_retrieval' into retrieve_image_text_pairs
9957ef3  fixed query embedder config
938e53f  fix distributed settings
a95131c  skip getting complements for candidates with text,image modality
db3222e  fix typpo
5cc9370  refactor raw retrieval
9e5ec40  refactor interactive_retriever
424ae26  refactored raw retrieval
b4acd11  Add a todo for image-txt retrieval
160fea0  add default value for not to break the existing calls
46222df  merge with raw_retrieval
410ffd3  update requirements
7323f36  temp commit
65a6b08  temp fix for complement retriever
72871c1  add complement candidates
2e21807  Merge branch 'main' into retrieve_image_text_pairs
c62880b  addressed review comments
b25ddbc  polish readme
@@ -0,0 +1,211 @@
""" | ||
Retrieves candidates for a given set of queries after embedding them. | ||
""" | ||
|
||
from enum import Enum | ||
import gc | ||
import json | ||
import os | ||
|
||
import numpy as np | ||
import torch | ||
import torch.distributed as dist | ||
from torch.utils.data import DataLoader | ||
from torch.nn.parallel import DistributedDataParallel as DDP | ||
|
||
from data.mbeir_dataset import ( | ||
MBEIRInferenceOnlyDataset, | ||
MBEIRInferenceOnlyCollator, | ||
) | ||
import dist_utils | ||
from dist_utils import ContiguousDistributedSampler | ||
from mbeir_embedder import generate_embeds_and_ids_for_dataset_with_gather | ||
from utils import build_model_from_config, set_seed | ||
from data.preprocessing.utils import unhash_did, DATASET_IDS, MBEIR_TASK | ||
|
||
|
||
class Modality(Enum): | ||
TEXT = "text" | ||
IMAGE = "image" | ||
IMAGE_TEXT = "image,text" | ||
|
||
|
||
class InteractiveRetriever: | ||
def __init__(self, cand_index_path: str, candidates_path: str, dataset_name, config): | ||
# Set up seed for reproducibility | ||
seed = config.seed + dist_utils.get_rank() | ||
set_seed(seed) | ||
self.dataset_id = DATASET_IDS[dataset_name] | ||
# Setup query embedder | ||
model = build_model_from_config(config) | ||
model.eval() | ||
|
||
# Ensure the model has an 'encode' method before generating embeddings | ||
if not callable(getattr(model, "encode_mbeir_batch")): | ||
raise AttributeError("The provided model does not have a callable 'encode' method.") | ||
if not callable(getattr(model, "get_img_preprocess_fn")): | ||
raise AttributeError("The provided model does not have an 'img_preprocess_fn' attribute.") | ||
if not callable(getattr(model, "get_tokenizer")): | ||
raise AttributeError("The provided model does not have a 'tokenizer' attribute.") | ||
self.img_preprocess_fn = model.get_img_preprocess_fn() | ||
self.tokenizer = model.get_tokenizer() | ||
|
||
# Enable distributed data parallel | ||
model = model.to(config.dist_config.gpu_id) | ||
if config.dist_config.distributed_mode: | ||
model = DDP(model, device_ids=[config.dist_config.gpu_id]) | ||
self.model = model | ||
print(f"Model is set up on GPU {config.dist_config.gpu_id}.") | ||
|
||
self.cand_index_path = cand_index_path | ||
self.config = config | ||
self.queries = [] | ||
|
||
# Load did_to_candidates | ||
self.did_to_candidates = {} | ||
with open(candidates_path, "r") as f: | ||
for l in f: | ||
c = json.loads(l.strip()) | ||
assert c["did"] not in self.did_to_candidates, "dids must be unique" | ||
self.did_to_candidates[c["did"]] = c | ||
|
||
def add_queries(self, queries: list[tuple[str, str, str, str]]): | ||
for query_modality, query_txt, query_img_path, candidate_modality in queries: | ||
if query_modality == Modality.TEXT.value: | ||
assert query_txt, "Query with 'text' modality must have non-null 'query_txt'" | ||
assert query_img_path is None, "Query with 'text' modality must have null 'query_img_path'" | ||
elif query_modality == Modality.IMAGE.value: | ||
assert query_txt is None, "Query with 'image' modality must have null 'query_txt'" | ||
assert query_img_path, "Query with 'image' modality must have non-null 'query_img_path'" | ||
elif query_modality == Modality.IMAGE_TEXT.value: | ||
assert query_txt, "Query with 'image' modality must have non-null 'query_txt'" | ||
assert query_img_path, "Query with 'image' modality must have non-null 'query_img_path'" | ||
else: | ||
raise ValueError("Only 'text', 'image' and 'image,text' query modalities are supported.") | ||
task_id = MBEIR_TASK[" -> ".join([query_modality, candidate_modality])] | ||
self.queries.append( | ||
{ | ||
# Hardcoded qid in format of dataset_id:query_num. | ||
"qid": ":".join([str(self.dataset_id), str(len(self.queries) + 1)]), | ||
"query_modality": query_modality, | ||
"query_txt": query_txt, | ||
"query_img_path": query_img_path, | ||
"task_id": task_id, | ||
"candidate_modality": candidate_modality, | ||
} | ||
) | ||
|
||
def _embed_queries(self): | ||
mbeir_data_dir = self.config.mbeir_data_dir | ||
embed_config = self.config.embed_config | ||
|
||
# Config for dataset | ||
data_config = self.config.data_config | ||
query_instruct_path = data_config.query_instruct_path | ||
image_size = tuple(map(int, data_config.image_size.split(","))) | ||
|
||
print_config = False | ||
if dist_utils.is_main_process(): | ||
print(f"\nEmbedder Log: Generating embeddings for {len(self.queries)} queries.") | ||
print_config = True | ||
|
||
dataset = MBEIRInferenceOnlyDataset( | ||
mbeir_data_dir, | ||
self.queries, | ||
query_instruct_path, | ||
self.img_preprocess_fn, | ||
enable_query_instruct=data_config.enable_query_instruct, | ||
print_config=print_config, | ||
) | ||
collator = MBEIRInferenceOnlyCollator( | ||
tokenizer=self.tokenizer, | ||
image_size=image_size, | ||
) | ||
|
||
# Config for data loader | ||
batch_size = self.config.dataloader_config.batch_size | ||
num_workers = self.config.dataloader_config.num_workers | ||
|
||
# Set up distributed data parallel | ||
num_tasks = dist_utils.get_world_size() | ||
global_rank = dist_utils.get_rank() | ||
sampler = ContiguousDistributedSampler( | ||
dataset, | ||
num_replicas=num_tasks, | ||
rank=global_rank, | ||
) # Note: assume the dataset is in sorted order. | ||
data_loader = DataLoader( | ||
dataset, | ||
batch_size=batch_size, | ||
num_workers=num_workers, | ||
pin_memory=True, | ||
sampler=sampler, | ||
shuffle=False, # Since we have distributed sampler, we don't need to shuffle the data here. | ||
collate_fn=collator, | ||
drop_last=False, | ||
) | ||
if dist.is_initialized(): | ||
dist.barrier() # Wait for rank 0 to finish saving the embeddings and ids. | ||
if dist_utils.is_main_process(): | ||
print(f"Embedder Log: Data loader is set up.") | ||
print(f"Embedder Log: Generating embeddings for {len(self.queries)} queries ...") | ||
print(f"Inference with half precision: {embed_config.use_fp16}") | ||
|
||
# Generate embeddings and ids | ||
embedding_list, id_list = generate_embeds_and_ids_for_dataset_with_gather( | ||
self.model, | ||
data_loader, | ||
device=self.config.dist_config.gpu_id, | ||
use_fp16=embed_config.use_fp16, | ||
) | ||
|
||
# Save the embeddings to a temprary .npy | ||
if not dist.is_initialized() or dist.get_rank() == 0: | ||
print(f"Embedder Log: Embedding list length: {len(embedding_list)}") | ||
print(f"Embedder Log: ID list length: {len(id_list)}") | ||
|
||
# Save the embeddings to .npy | ||
self.embed_file = "interactive_queries_embed.npy" | ||
np.save(self.embed_file, embedding_list) | ||
print(f"Embedder Log: Saved embeddings to {self.embed_file}.") | ||
|
||
if dist.is_initialized(): | ||
dist.barrier() # Wait for rank 0 to finish saving the embeddings and ids. | ||
|
||
# Delete the embeddings and IDs to free up memory | ||
del embedding_list | ||
del id_list | ||
del data_loader | ||
del dataset | ||
del collator | ||
del sampler | ||
|
||
# Explicitly call the garbage collector | ||
gc.collect() | ||
torch.cuda.empty_cache() | ||
|
||
def retrieve(self, k: int = 1, batch_size: int = 100): | ||
results = [] | ||
self._embed_queries() | ||
# retrieve skipping the eval | ||
from mbeir_retriever import search_index | ||
|
||
print(f"Retriever: Searching with k={k}") | ||
_, retrieved_indices = search_index( | ||
self.embed_file, | ||
self.cand_index_path, | ||
batch_size=batch_size, | ||
num_cand_to_retrieve=k, | ||
) | ||
|
||
for indices in retrieved_indices: | ||
retrieved_cands = [] | ||
for hashed_doc_id in indices: | ||
doc_id = unhash_did(hashed_doc_id) | ||
retrieved_cands.append(self.did_to_candidates[doc_id]) | ||
results.append(retrieved_cands) | ||
|
||
# Remove the temprarily stored embeddings | ||
os.remove(self.embed_file) | ||
|
||
return results |
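For orientation, a minimal usage sketch of InteractiveRetriever follows. It assumes an OmegaConf-style config object (the class reads fields such as config.seed, config.dist_config, and config.embed_config from it); the paths, the config file, and the dataset name are hypothetical placeholders, not artifacts shipped with this PR.

# Minimal usage sketch. All paths, the dataset name, and the config file are
# hypothetical placeholders.
from omegaconf import OmegaConf

config = OmegaConf.load("configs/retrieval.yaml")  # assumed config location
retriever = InteractiveRetriever(
    cand_index_path="index/candidates.index",  # pre-built candidate index (placeholder)
    candidates_path="data/candidates.jsonl",   # JSONL pool with unique "did" fields (placeholder)
    dataset_name="mscoco",                     # must be a key in DATASET_IDS (assumption)
    config=config,
)

# Each query is (query_modality, query_txt, query_img_path, candidate_modality).
retriever.add_queries(
    [
        ("text", "a dog playing fetch", None, "image"),
        ("image", None, "queries/dog.jpg", "text"),
    ]
)

results = retriever.retrieve(k=5)
# results[i] holds the top-5 candidate records (full JSONL entries) for query i.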
I noticed that the InteractiveRetriever requires a pre-built candidate index file to function correctly. To help users with this setup, could we consider adding a script, such as run_interactive_retriever_pipeline.sh, that demonstrates the entire pipeline? The script would cover embedding and indexing the candidates, loading the index into the interactive retriever, and retrieving demo queries. Additionally, a step-by-step guide in the README would greatly enhance the user experience.
Done. I created a unirag folder next to inbatch for BLIP_FF Large and CLIP_SF Large. It has embed, index, and retrieval configs, plus the run script, as you requested.
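The run script itself is not shown in this excerpt. As a rough sketch of the pipeline described above, such a script might chain the three stages as follows; every entry-point name, config path, and flag here is an illustrative assumption, not the repository's actual interface.

#!/usr/bin/env bash
# Hypothetical sketch of run_interactive_retriever_pipeline.sh.
# Script names, config paths, and flags are assumptions for illustration only.
set -euo pipefail

# 1. Embed the candidate pool.
python mbeir_embedder.py --config_path configs/unirag/embed.yaml

# 2. Build the candidate index from the embeddings.
python mbeir_retriever.py --config_path configs/unirag/index.yaml

# 3. Load the index and retrieve results for the demo queries.
python interactive_retriever_demo.py --config_path configs/unirag/retrieval.yaml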