Reorganize file store data model
From https://freexian-team.pages.debian.net/debusine/reference/devel-blueprints/dynamic-storage.html, we have the following data model changes that probably need to be made all in one go:
- Move `Workspace.default_file_store` and `Workspace.other_file_stores` to `Scope.file_stores`, and `Workspace.file_stores` to `Scope.upload_file_stores` and `Scope.download_file_stores` (ordered according to the corresponding priorities; see below). (!1589 (merged))
  - The default file store is given an upload priority of 100. Other file stores are left with unset priorities.
  - The data migration fails if there are workspaces in a scope with different file stores; the administrator will have to resolve that manually (see the migration sketch after this list).
  - Move the `--default-file-store` option from `debusine-admin manage_workspace` to a new `debusine-admin scope manage` command.
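  A minimal sketch of that data migration check, assuming the models live in a Django app labelled `db`, that `Workspace` has a `scope` foreign key, and that the field names above are kept; this is an illustration, not the actual code from !1589:

  ```python
  from django.db import migrations


  def check_consistent_file_stores(apps, schema_editor):
      """Fail if workspaces within one scope use different file stores."""
      Workspace = apps.get_model("db", "Workspace")  # app label assumed
      seen = {}
      for workspace in Workspace.objects.all():
          stores = (
              workspace.default_file_store_id,
              frozenset(
                  workspace.other_file_stores.values_list("id", flat=True)
              ),
          )
          previous = seen.setdefault(workspace.scope_id, stores)
          if previous != stores:
              # Deliberately abort: the administrator must unify the file
              # stores of this scope's workspaces before migrating.
              raise RuntimeError(
                  f"Workspaces in scope {workspace.scope_id} use different"
                  f" file stores; resolve this manually before migrating"
              )


  class Migration(migrations.Migration):
      dependencies = []  # would name the previous migration in practice

      operations = [
          migrations.RunPython(
              check_consistent_file_stores, migrations.RunPython.noop
          )
      ]
  ```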
- Add the following extra data on the relationship between `Scope` and `FileStore`, and extend `debusine-admin scope manage` to be able to change it (a model sketch follows this list): (!1589 (merged))
  - `upload_priority` (integer, optional): The priority of this store for the purpose of storing new files. When adding a new file, debusine tries stores whose policies allow adding new files in descending order of upload priority, counting null as the lowest.
  - `download_priority` (integer, optional): The priority of this store for the purpose of serving files to clients. When downloading a file, debusine tries stores in descending order of download priority, counting null as the lowest; it breaks ties in descending order of upload priority, again counting null as the lowest. If there is still a tie, it picks one of the possibilities arbitrarily.
  - `populate` (boolean, defaults to False): If True, the storage maintenance job ensures that this store has a copy of all files in the scope.
  - `drain` (boolean, defaults to False): If True, the storage maintenance job moves all files in this scope to some other store in the same scope, following the same rules for finding a target store as for uploads of new files. It does not move files into a store if that would take its total size over `soft_max_size` (either for the scope or the file store), and it logs an error if it cannot find any eligible target store.
  - `drain_to` (string, optional): If this field is set, then constrain `drain` to use the store with the given name in this scope.
  - `read_only` (boolean, defaults to False): If True, debusine will not add new files to this store. Use this in combination with `drain: True` to prepare for removing the file store.
  - `write_only` (boolean, defaults to False): If True, debusine will not read files from this store. This is suitable for provider storage classes that are designed for long-term archival rather than routine retrieval, such as S3 Glacier Deep Archive.
  - `soft_max_size` (integer, optional): The number of bytes that the file store can hold for this scope (counting files that are in multiple scopes towards all of the scopes in question). This limit may be exceeded temporarily during uploads; the storage maintenance job will move the least-recently-used files to another file store to get back below the limit.
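  As a rough illustration of this per-relationship data, here is what a Django through model might look like, along with the upload ordering rule (descending `upload_priority`, nulls last); the name `FileStoreInScope` and all field types are assumptions, not necessarily what !1589 implemented:

  ```python
  from django.db import models
  from django.db.models import F


  class FileStoreInScope(models.Model):
      """Hypothetical through model for the Scope/FileStore relation."""

      scope = models.ForeignKey("Scope", on_delete=models.PROTECT)
      file_store = models.ForeignKey("FileStore", on_delete=models.PROTECT)
      upload_priority = models.IntegerField(blank=True, null=True)
      download_priority = models.IntegerField(blank=True, null=True)
      populate = models.BooleanField(default=False)
      drain = models.BooleanField(default=False)
      drain_to = models.CharField(max_length=255, blank=True, null=True)
      read_only = models.BooleanField(default=False)
      write_only = models.BooleanField(default=False)
      soft_max_size = models.BigIntegerField(blank=True, null=True)


  def upload_candidates(scope):
      """Stores to try for a new file, best first.

      Descending upload_priority with null counting as the lowest, and
      read-only stores excluded since they never accept new files.
      """
      return FileStoreInScope.objects.filter(
          scope=scope, read_only=False
      ).order_by(F("upload_priority").desc(nulls_last=True))
  ```

  The download ordering can use the same idiom, with `download_priority` first and `upload_priority` as the tie-breaker.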
- In non-test code that reads file contents (`debusine.server.tar.TarArtifact`, `debusine.server.tasks.package_upload.PackageUpload`, `debusine.web.views.files.FileDownloadMixin`, `debusine.web.views.files.FileWidget`, `debusine.web.views.lintian.LintianView`), use `Scope.download_file_stores(file).first()` or equivalent rather than `Scope.default_file_store` (see the sketch below). (!1589 (merged))
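  In those call sites, the change might look something like this (a sketch only; the backend accessor name `get_backend_object` is an assumption):

  ```python
  # Old pattern (sketch): assumes a single per-workspace default store.
  backend = workspace.default_file_store.get_backend_object()

  # New pattern (sketch): take the highest-download-priority store in the
  # scope that actually contains the file, and fail cleanly if none does.
  store = scope.download_file_stores(fileobj).first()
  if store is None:
      raise FileNotFoundError("no file store in this scope holds this file")
  backend = store.get_backend_object()
  ```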
- Add a new `instance_wide` boolean field to `FileStore`. If True, this store can be used by any scope on this debusine instance. If False, it may only be used by a single scope (i.e. there is a unique constraint on `Scope`/`FileStore` relations where `FileStore.instance_wide` is False); one way to enforce this is sketched below. (!1622 (merged))
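  Because `instance_wide` lives on `FileStore` while the uniqueness applies to the relation table, a conditional database `UniqueConstraint` cannot express this directly; one possible shape, continuing the hypothetical `FileStoreInScope` sketch above (not necessarily what !1622 does), is application-level validation:

  ```python
  from django.core.exceptions import ValidationError
  from django.db import models


  class FileStoreInScope(models.Model):
      scope = models.ForeignKey("Scope", on_delete=models.PROTECT)
      file_store = models.ForeignKey("FileStore", on_delete=models.PROTECT)

      def clean(self):
          super().clean()
          # A store that is not instance-wide may be related to at most
          # one scope on this debusine instance.
          if not self.file_store.instance_wide:
              in_use_elsewhere = (
                  type(self)
                  .objects.filter(file_store=self.file_store)
                  .exclude(pk=self.pk)
                  .exists()
              )
              if in_use_elsewhere:
                  raise ValidationError(
                      "file store is not instance-wide and is already"
                      " used by another scope"
                  )
  ```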
- Add new `soft_max_size` and `max_size` integer fields to `FileStore`, specifying soft and hard limits respectively in bytes for the total capacity of the store. The soft limit may be exceeded temporarily during uploads; the storage maintenance job will move the least-recently-used files to another file store to get back below the limit. The hard limit may not be exceeded even temporarily during uploads (a sketch of the difference follows). (!1624 (merged))
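  The difference between the two limits might look like this in upload handling (a sketch; `total_size` and the exception type are placeholders):

  ```python
  def ensure_room_for_upload(file_store, size: int) -> None:
      """Refuse an upload that would exceed the hard limit.

      soft_max_size is deliberately not checked here: it may be exceeded
      temporarily, and the storage maintenance job later moves the
      least-recently-used files elsewhere to get back under it.
      """
      if (
          file_store.max_size is not None
          and file_store.total_size + size > file_store.max_size
      ):
          raise RuntimeError(
              f"file store {file_store.name!r} would exceed its hard"
              f" size limit of {file_store.max_size} bytes"
          )
  ```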
It isn't necessary to implement the details of `policy` for now, nor the maximum size restrictions: just setting out the data model will be good enough for this issue. A later issue will deal with the details.
Report time in #539 (closed).