
Upgrader should reserve and release allocations if transient size is unknown #130

Open · wants to merge 2 commits into base: feat/gc
Conversation

@aarshkshah1992 (Contributor) commented on Aug 1, 2022

For #134.

PR 1 for the Automated GC work.

The Upgrader should reserve and release allocations when the transient size is unknown, before downloading a file from a remote mount that does not support random access.
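The reserve-then-release flow described above can be sketched as follows. This is illustrative only: the `allocator`, `download` helper, and chunk size are stand-ins invented for this sketch, not the actual dagstore API.

```go
package main

import "fmt"

// allocator is a toy stand-in for the real transient allocator.
type allocator struct{ free int64 }

func (a *allocator) reserve(n int64) error {
	if a.free < n {
		return fmt.Errorf("out of space")
	}
	a.free -= n
	return nil
}

func (a *allocator) release(n int64) { a.free += n }

const chunkSize int64 = 1 << 20 // reserve 1 MiB at a time

// download simulates fetching `size` bytes whose total is unknown upfront:
// reserve incrementally while writing, then release the unused tail once
// the final size is known, or roll back everything on failure.
func download(a *allocator, size int64) (reserved, written int64, err error) {
	for written < size {
		if err := a.reserve(chunkSize); err != nil {
			a.release(reserved) // roll back on failure
			return 0, 0, err
		}
		reserved += chunkSize
		n := size - written
		if n > chunkSize {
			n = chunkSize
		}
		written += n
	}
	a.release(reserved - written) // give back the over-reservation
	return reserved, written, nil
}

func main() {
	a := &allocator{free: 4 << 20}
	r, w, _ := download(a, 5<<19) // a 2.5 MiB transient
	fmt.Println(r, w, a.free)     // 3145728 2621440 1572864
}
```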

@raulk (Member) left a comment


Initial feedback.

```diff
-func (u *Upgrader) refetch(ctx context.Context, into *os.File) error {
-	log.Debugw("actually refetching", "shard", u.key, "path", into.Name())
+func (u *Upgrader) refetch(ctx context.Context, outpath string) error {
+	log.Debugw("actually refetching", "shard", u.key)
```

Any reason why not to preserve the path here?

```go
// Use this when automated GC is disabled.
type SimpleDownloader struct{}

func (s *SimpleDownloader) Download(ctx context.Context, underlying Mount, outpath string) error {
```

`Download` as a name implies that this is going to be fetched from the network. I think `CopyLocal` or `CloneLocal` might work best here as a more generic term that is consistent with scenarios like copying from a USB key into the local filesystem.

Comment on lines +44 to +49
```go
// Reserve attempts to reserve `toReserve` bytes for the transient and returns the number
// of bytes actually reserved or any error encountered when trying to make the reservation.
// The `nPrevReservations` param denotes the number of previous successful reservations
// the caller has made for an ongoing download.
// Note: reserved != 0 if and only if err == nil.
Reserve(ctx context.Context, k shard.Key, nPrevReservations int64, toReserve int64) (reserved int64, err error)
```

This interface forces the caller to keep track of state that it shouldn't, concretely `nPrevReservations`. In fact, this is a bit leaky because not all allocators may need this information, and other allocators may require tracking other state.

I recommend changing the mechanics of this interface to be mediated by a `Reservation` object:

```go
type TransientAllocator interface {
	ReservationFor(k shard.Key) Reservation
}

type Reservation interface {
	// Reserves the specified number of bytes, returning the total number of bytes reserved.
	Reserve(ctx context.Context, required uint64) (current uint64, err error)

	// Done releases the reservation, reporting the final size used to the allocator.
	// Calling Reserve after calling Done will panic.
	Done(ctx context.Context, final uint64) error
}
```

Comment on lines +58 to +63
```go
// TransientDownloader manages the process of downloading a transient locally from a Mount
// when we are upgrading a Mount that does NOT have random access capabilities.
type TransientDownloader interface {
	// Download copies the transient from the given underlying Mount to the given output path.
	Download(ctx context.Context, underlying Mount, outpath string) error
}
```

Does this need to be pluggable? What kinds of different downloader logic do you foresee here?
