Server tests #502
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
These endpoints are not used and don't seem to help our workflow; there is no need to keep maintaining them.
PUT is now permitted only for admin users. We need to define what can be edited and how to sync with Auth0 before exposing this to end users.
Users were able to edit the results dict and associated entities.
Users have to include hashes for URLs on mlcube creation, but they were previously able to set them to blank later.
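A minimal sketch of the update-time check described above: a hash that was set at creation may not be blanked out by a later edit. The function and field names here (validate_hash_update, mlcube_hash, parameters_hash) are illustrative assumptions, not the actual server code.

```python
def validate_hash_update(existing: dict, update: dict) -> dict:
    """Reject updates that blank out a previously set URL hash.

    `existing` is the stored record; `update` holds the requested changes.
    Field names are hypothetical stand-ins for the real hash fields.
    """
    hash_fields = ["mlcube_hash", "parameters_hash", "image_tarball_hash"]
    for field in hash_fields:
        # A hash that exists on the record must not be replaced by an empty value
        if field in update and not update[field] and existing.get(field):
            raise ValueError(f"{field} cannot be set to blank after creation")
    return {**existing, **update}
```

The same idea fits naturally in a serializer's update-time validation, where the existing instance is available for comparison.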
Make dataset associations private, and return the fields necessary for the client to know the latest approval status.
This field was previously updated for any request containing approval_status.
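The fix above can be sketched as: only touch the approval timestamp when approval_status actually transitions, not on every request that happens to include the field. The names (apply_update, approved_at) are illustrative assumptions, not the actual server code.

```python
from datetime import datetime, timezone

def apply_update(record: dict, update: dict) -> dict:
    """Update approved_at only on a real approval_status transition.

    Hypothetical sketch: `record` is the stored entity, `update` the request body.
    """
    new_record = {**record, **update}
    old_status = record.get("approval_status")
    new_status = update.get("approval_status", old_status)
    # Requests that merely repeat the current status leave the timestamp alone
    if new_status != old_status:
        new_record["approved_at"] = datetime.now(timezone.utc).isoformat()
    return new_record
```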
Force-pushed from 26507cc to b3d7f7f (compare)
cli/medperf/comms/rest.py (outdated)
@@ -183,8 +183,10 @@ def get_benchmark_models(self, benchmark_uid: int) -> List[int]:
        Returns:
            list[int]: List of model UIDS
        """
        models = self.__get_list(f"{self.server_url}/benchmarks/{benchmark_uid}/models")
        model_uids = [model["id"] for model in models]
        # TODO: filter to find approved ones only if we decide to have
Should we handle filtering within the comms layer? Or within the entity layer? I think we've done filtering on the entity layer so far, because we also need to apply filtering to local entities
It's tricky for associations since we don't have an association Entity on the client yet. Filtering associations requires two steps:
- First, finding the latest association of each mlcube. Ideally this should live in an association Entity, but for now I think it should live in the comms layer, since the client doesn't actually care about the history of associations. Also, this logic is required in all places in the client code where this comms function is called.
- Then, ordinary filtering by approval status. Currently this filtering is required in two places:
  a. The entity list command, which already applies filtering in the commands layer.
  b. benchmark.get_models_uids, which by definition should return the model IDs of approved associations. We can apply filtering inside this function.

I think this will become cleaner if we decide to create an association Entity in our code base.
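The two-step filtering described above can be sketched as follows. The association field names (model_mlcube, created_at, approval_status) are assumptions for illustration; the real payload may differ.

```python
def get_approved_model_uids(associations: list) -> list:
    """Two-step filter: latest association per mlcube, then approved only.

    Hypothetical sketch over association dicts with assumed keys
    "model_mlcube", "created_at", and "approval_status".
    """
    # Step 1: keep only the latest association of each mlcube.
    # Sorting by creation time and overwriting leaves the newest entry per key.
    latest = {}
    for assoc in sorted(associations, key=lambda a: a["created_at"]):
        latest[assoc["model_mlcube"]] = assoc
    # Step 2: ordinary filtering by approval status.
    return [
        uid
        for uid, assoc in latest.items()
        if assoc["approval_status"] == "APPROVED"
    ]
```

Whether this lives in the comms layer or a future association Entity, the two steps stay the same; only the home of the helper changes.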
- added is_valid and user_metadata fields
- PUT and DELETE are now admin-only
This reverts commit 152862a and deactivates the endpoints.
Doing this requires its three mlcubes to be operational.
Hey Hasan! Just left a few comments/questions for the future. I skimmed over the server tests and implementation. Everything looks great and the tests look very well organized. Amazing amazing work as always Hasan!
        assert benchmark.reference_model_mlcube in benchmark.models

-   def test_benchmark_includes_additional_models_in_models(self, setup):
+   def test_benchmark_get_models_works_as_expected(self, setup, expected_models):
Note for the future: I'm not a big fan of the "works_as_expected" naming convention. We should find a way of shortly describing what's that expected behavior we're testing. No need to make any change now, but I think I've seen it a few times already and it's not as descriptive as a test should be
that's a very valid point
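As an illustration of the naming point above, a name can state the expected behavior directly instead of "works_as_expected". The class and test below are a hypothetical, self-contained stand-in, not the project's actual test code.

```python
class FakeBenchmark:
    """Minimal stand-in for a benchmark with a reference model and extras."""

    def __init__(self, reference_model, additional_models):
        self.reference_model_mlcube = reference_model
        self.additional_models = additional_models

    def get_models_uids(self):
        # Reference model first, then the additional associated models
        return [self.reference_model_mlcube] + self.additional_models


# The name says what is being asserted, so a failure report is self-explanatory
def test_get_models_uids_returns_reference_and_additional_models():
    benchmark = FakeBenchmark(reference_model=1, additional_models=[2, 3])
    assert benchmark.get_models_uids() == [1, 2, 3]
```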
@@ -1,3 +1,64 @@
# Server

Documentation TBD

## Writing Tests
I love that you added this! Thanks for the documentation Hasan!
@@ -20,7 +20,7 @@ class Benchmark(models.Model):
        description = models.CharField(max_length=100, blank=True)
        docs_url = models.CharField(max_length=100, blank=True)
        owner = models.ForeignKey(User, on_delete=models.PROTECT)
-       demo_dataset_tarball_url = models.CharField(max_length=256, blank=True)
+       demo_dataset_tarball_url = models.CharField(max_length=256)
I remember we talked about this before, but can you remind me how we're getting past situations where demo datasets can't be shared directly with a link?
Currently we support using Synapse storage (e.g., Verena used a private demo dataset). We should eventually support things like Dropbox or Google Drive.
This PR adds tests for all server endpoints, focusing on permissions / disallowed actions.
TODO: update this. EDIT: done
closes #509