diff --git a/.custom_wordlist.txt b/.custom_wordlist.txt index 5900b9d..d4a79e7 100644 --- a/.custom_wordlist.txt +++ b/.custom_wordlist.txt @@ -4,16 +4,19 @@ adapter's AddingLaunchpadCelebrity analyze analyzer +AnyPerson api AppleApplications appmanifest appserver +AppServerLayer appservers artifact artifacts aspirational AssertionsInLaunchpad attrgetter +AttributeError auditable auth authorization @@ -23,40 +26,57 @@ autocert autogenerated backend backends +backport +backported +backports backtrace Backtraces barfunc +BatchNavigator +batchnavigator Becca BEM +Bionic's Blazingly bool boolean breakpoint browserconfig BrowserNotificationMessages +brz +BugSubscription bugtracker BugWatch BugWatches bugzilla +buildable buildbot buildd +builddeb bzr bzr's -centralized +Canonical's +CanonicalUrlData Centralized +centralized cfg cgroup cgroups +changelog changeset changesets charmcraft Cmd CMDLINE +Cobol codebase Codehosting +codehosting codeimport codeimportscheduler CodeReviewChecklist +codereviewmessage +CodeReviewMessageView config configs ConfiguringWebApplications @@ -67,43 +87,63 @@ crontab crontabs cryptographic cryptographically +CSCVS css Ctrl +customisable +customisations customize CVE Dalia Dalia's +Danilo DatabaseSchemaChangesProcess DatabaseSetup +DateTimeJSONEncoder DatetimeUsageGuide DavidAllouche +DBEnums dbpatches dbschema dbupgrade ddl +debchange +debcommit +DEBEMAIL +DEBFULLNAME +debhelper +debian DebuggingWithGdb +debversion +DecoratedResultSet +DecoratedResultSets defense defense deps +Desc dev -directDelivery +dh +dia DirectDelivery -distro +directDelivery Distro -distroseries +distro +distros DistroSeries +distroseries distutils DNS DocFileSuite -docstring Docstring -docstrings +docstring Docstrings +docstrings doctest doctests docutils downstreams dtrt +dulwich EChangePolicy el else's @@ -122,6 +162,10 @@ FK flavor FooBar foofunc +foos +FooSet +ForbiddenAttribute +ForeignKey formatter formlib FreshLogs @@ -132,7 +176,13 @@ fsyncs ftest fti functiondef +ganesha gangotri +geoIP +geoip +getFeatureFlag +gettext +gina's github globals GPG @@ -140,64 +190,92 @@ gunicorn gzip HackingLazrLibraries hba +hg +hirsute's hostnames -HSTS howto +HSTS html http https https +IBranchMergeProposal +IBranchTarget +ICanonicalUrlData +ICodeReviewMessage +iframe iharness IMailDelivery IMailer importances +indexable InformationInfrastructure infos +init initialized +instantiation integrations io ip +IPerson IPv IPython +IRangeFactory irc IRCMeetings -javascript +iter Javadoc +javascript +JavascriptUnitTesting jenkaas jenkins jinja js JScript +JSONEncoder +keyring kiko kompare LandingChanges langpack LaunchpadAuthentication LaunchpadDatabaseRevision +LaunchpadFormView +LaunchpadObjectFactory LaunchpadPpa +LaunchpadProductionStatus +LaunchpadView lazr libera +libgit lifecycle listdir +ListRangeFactory LivePatching LiveScript logpoints logrotate lookup lookups +LOSA LOSAs +losas lp Lp's lpbuildbot +lpbuildd lpci LPHowTo lpnet +lpreview +lpsetup LTS lxc lxd LXD's macOS macquarie +MailingListSet Mantic ManualCdImageMirrorProber matchers @@ -206,21 +284,34 @@ matic maximize mbox mboxMailer +MemCache memcache milestoneoverlay +minified +minifies +minify minimize +mmm MockIo mockups mojo mozilla natively -né +NavigationMenu +NavigationMenus newsampledata +nfs +NPM +NTP +NULLs +né OAuth +OCI OEM oid ok ols +omg OOPSes OpenID opstats @@ -235,45 +326,69 @@ os OSA OWASP OWASP's -pagespeed PageSpeed +pagespeed pagetest -pagetests PageTests +pagetests pamola PatchSubmission pdb pentested performant -pgsql 
+pgbouncer pgSQL +pgsql pipx plaintext png po -PolicyandProcess +POExport +POExportRequest +POFile +POFiles +POFileTranslator PolicyAndProcess +PolicyandProcess PolicyForDocumentingCustomDistributions +poller +POMsgID pooler portlets -postgresql -PostgreSQL PostGreSQL +PostgreSQL +postgresql POSTs -ppa +POTemplate +POTemplateSharingSubset +POTExport +POTMsgSet +POTMsgSets +POTranslation +POTs PPA +ppa +PPAs PQM -psql +pqm pre +prejoining +prejoins PreMergeReviews +prepending +prepopulate +prepopulation preprocessed +pre_iter_hook prioritize prober proc +ProductSeries programmatically prometheus proname proxied +psql px py pydoctor @@ -282,20 +397,27 @@ PyPI qa QAProcess qastaging -queueddelivery -queuedDelivery +qemu +QueryCollector QueuedDelivery +queuedDelivery +queueddelivery quickref quickstart +rc rctp realfavicongenerator realizes ReleaseCycles +repl repo RESTful +resultset +ResultSets rocketfuel rollout rollouts +rootsite rosetta RPC rst @@ -303,30 +425,45 @@ rsync rsyncing runtime SafariWebContent +scalability screencast ScreenCasts sdist +SecurityProxy segfaulted sendmail SendmailMailer seqscan setUp +setupDTCBrowser +setupRosettaExpertBrowser setuptools +SFTP +simplejson simplestreams Slony +slony smtp -smtpMailer SMTPMailer +smtpMailer snapshotting SolutionsLog sourcecode sourceforge +SourcePackage sourcepackage +SourcePackagename sourcetree soyuz specialized specializes sql +SQL's +SQLBase +sqlbuilder +SQLObject +SQLObject's +SQLObjectResultSets SRE SREs srv @@ -335,20 +472,27 @@ SSO StagingServer standardized stdin +stepto steve stg StormMigrationGuide -stubmailer +StormRangeFactory +StormStatementRecorder stubMailer +stubmailer StyleGuides stylesheet subclassing subdirectory subprocess subproject +subselects +subvertpy sudo +SuggestivePOTemplate summarized svg +svn symlinked symlinks synchronize @@ -358,12 +502,18 @@ syntaxes systemd TableRenamePatch talisker +TAs TeamEmail templating testability testbed -testMailer +TestCase +TestingJavaScript +TestingWebServices TestMailer +testMailer +testr +testrepository testrunner testrunner's TestsFromChanges @@ -377,51 +527,79 @@ traceback tradeoff tradeoffs Transactional +TranslationGroup +TranslationImportQueue +TranslationMessage +TranslationTemplateItem traversers triaged triaging tsearch -ubuntu Ubuntu +ubuntu UCT UltimateVimPythonSetup unauthorized +uncheck Uncomment unittest unobvious +unproxied +unsuffixed untriaged untrusted upstreams +url +urls userbase +UtcDateTimeCol +validator +Validators +validators VBScript vbuilder +ViewTests +virt virtualenv virtualenvs +virtualized +VM +VMs VPN +VPOTExport +webapp webdav webhook webservice webservice's +webservices +wgrant's whitespace wildcherry Wishlist +WorkingWithDbDevel WorkingWithReviews worktrees WSGI +wsgi www Xenial +Xenial's XHR xml XML-RPC xmlrpc +xvfb YAGNI yaml yui yuilibrary +yuitest +yuixhr yy zcml ZFS -zope Zope +zope Zope's zz diff --git a/.sphinx/spellingcheck.yaml b/.sphinx/spellingcheck.yaml index d3879fc..fc160bf 100644 --- a/.sphinx/spellingcheck.yaml +++ b/.sphinx/spellingcheck.yaml @@ -9,7 +9,7 @@ matrix: - .custom_wordlist.txt output: .sphinx/.wordlist.dic sources: - - 
_build/**/*.html|!_build/explanation/engineering-overview-translations/index.html|!_build/explanation/testing/index.html|!_build/explanation/feature-flags/index.html|!_build/explanation/launchpad-ppa/index.html|!_build/explanation/branches/index.html|!_build/explanation/code/index.html|!_build/explanation/security-policy/index.html|!_build/explanation/database-performance/index.html|!_build/explanation/url-traversal/index.html|!_build/explanation/navigation-menus/index.html|!_build/explanation/storm-migration-guide/index.html|!_build/explanation/mail/index.html|!_build/explanation/javascript-buildsystem/index.html|!_build/explanation/javascript-integration-testing/index.html + - _build/**/*.html pipeline: - pyspelling.filters.html: comments: false diff --git a/custom_conf.py b/custom_conf.py index ea75eb7..696e7f6 100644 --- a/custom_conf.py +++ b/custom_conf.py @@ -129,14 +129,10 @@ '/Background', '/Concepts', # needs update '/HowToUseCodehostingLocally', # needs update - '/Loggerhead', # needs update 'Database/TableRenamePatch', # needs update 'Debugging#Profiling%20page%20requests', # needs update 'Debugging#Special%20URLs', # needs update 'JavascriptUnitTesting/MockIo', # needs update - 'Soyuz', # needs update - 'UI/CssSprites', # needs update - 'attachment:codehosting.png', # needs update 'https://git.launchpad.net/launchpad-mojo-specs/tree/mojo-lp-git/services', # private 'https://wiki.canonical.com/InformationInfrastructure/OSA/LaunchpadProductionStatus', # private 'https://wiki.canonical.com/Launchpad/PolicyandProcess/ProductionChange', # private @@ -145,6 +141,7 @@ 'irc.libera.chat', # this is not an HTTP link r'https://github\.com/canonical/fetch-service*', # private r'https://github\.com/canonical/fetch-operator*', # private + 'https://git.launchpad.net/charm-launchpad-buildd-image-modifier/tree/files/scripts/setup-ppa-buildd', # private ] # Pages on which to ignore anchors diff --git a/explanation/branches.rst b/explanation/branches.rst index 9d1323a..360b19c 100644 --- a/explanation/branches.rst +++ b/explanation/branches.rst @@ -54,9 +54,9 @@ It is also possible to submit directly to the **db-devel** branch. Let's Try That in Words ----------------------- -Database changes can be destabilizing to other work, so we isolate them +Database changes can be destabilising to other work, so we isolate them out into a separate branch (**db-devel**). Then there are two arenas for -stabilizing changes for deployment: **stable** (which ends up on +stabilising changes for deployment: **stable** (which ends up on `qastaging `__ and is fed from the **master** branch), and **db-stable** (which ends up on `staging `__ and is fed from the @@ -79,7 +79,7 @@ In summary: with **db-devel**, sent as if it came from the Launchpad list. The Launchpad list will be informed of merge failures, and Launchpad developers will collectively be responsible for correcting them. - (***TODO: is this some internal list? Hmmm.***) + (***TODO: is this some internal list?***) - Staging runs **db-stable**; qastaging runs **stable**. We will deploy production DB schema changes from **db-stable**. (After a deployment, @@ -155,8 +155,8 @@ Now create and land a merge proposal. FAQ --- -Can I land a testfix before buildbot has finished a test run that has failed or will fail? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Can I land a test fix before buildbot has finished a test run that has failed or will fail? 
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Yes you can, and please do if appropriate, because this will mean that
other developers will not encounter a broken tree at all.
diff --git a/explanation/code.rst b/explanation/code.rst
index d625a4f..4ef6890 100644
--- a/explanation/code.rst
+++ b/explanation/code.rst
@@ -13,7 +13,7 @@ system. The major sub-systems are:

 - The `git` and `bzr` / `brz` clients (neither of which is part of
   Launchpad, but their behaviours are important to us)
-- Connectivity to Launchpad (git, git+ssh, and https for Git; sftp and
+- Connectivity to Launchpad (git, git+ssh, and https for Git; SFTP and
   bzr+ssh for Bazaar)
 - Hosting infrastructure
 - The underlying object model
@@ -21,15 +21,15 @@
 - Email processing
 - Code imports (from CVS, Subversion, git and Mercurial)
 - Branch source code browser (`cgit `__
-  for Git; `loggerhead `__ for Bazaar)
+  for Git; :doc:`loggerhead <../how-to/land-update-for-loggerhead>` for Bazaar)
 - Source package recipes (`git-build-recipe`/`bzr-builder\` integration
-  with `Soyuz `__)
+  with :doc:`Soyuz <../how-to/use-soyuz-locally>`)

 Each of these subsystems also have multiple moving parts and some have
 other asynchronous jobs associated with them.

-The `codehosting overview diagram `__
-summarizes how some of these systems interact.
+The codehosting overview diagram (``../images/codehosting.png``)
+summarises how some of these systems interact.

 You can `run the codehosting system locally `__.
@@ -86,14 +86,14 @@
 Apache handles the HTTP routing using a number of mod-rewrite rules.

 '''Parts [and responsibilities] '''

-- HTTP apache configuration [shared with LOSAs]
+- HTTP Apache configuration [shared with LOSAs]
 - branch location rewrite script (called by mod-rewrite rule)
 - ssh server

   - authentication
-  - sftp implementation
+  - SFTP implementation
   - smart server launching

 - smart server
@@ -182,7 +182,7 @@ into Git repositories in Launchpad.

 -
-   - cscvs for CVS (and legacy Subversion imports)
+   - CSCVS for CVS (and legacy Subversion imports)
    - bzr-svn and subvertpy for all new Subversion imports
    - bzr-git and dulwich for git
    - bzr-hg for mercurial imports

Git repository source code browser (cgit)

 Launchpad uses `cgit `__ to
 provide a web view of the repository contents. We use an unmodified package of
-\`cgit`; Launchpad's customizations are in
+\`cgit`; Launchpad's customisations are in
 `turnip.pack.http `__.

Bazaar branch source code browser (loggerhead)
 branch.

 - loggerhead itself - community project but with major contributions
   from Canonical

-See `Loggerhead for Launchpad developers `__ for details on
+See :doc:`Loggerhead for Launchpad developers <../how-to/land-update-for-loggerhead>` for details on
 how to land changes to Launchpad loggerhead.

 Source package recipes
diff --git a/explanation/css.rst b/explanation/css.rst
index c0e31e9..2e17e72 100644
--- a/explanation/css.rst
+++ b/explanation/css.rst
@@ -39,7 +39,7 @@ If you are dealing with sprites you may also have to run:

    make sprite_image

-`More info on sprites `__
+:doc:`More info on sprites `.
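If you change a sprite, a typical rebuild cycle looks something like this (a sketch; it assumes ``css_combine`` is still the Makefile target that rebuilds the combined stylesheet):

::

   # regenerate the sprite sheet from the individual icon files
   make sprite_image

   # rebuild the combined stylesheet so the new sprite offsets are picked up
   make css_combine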
Fonts
-----
diff --git a/explanation/database-performance.rst b/explanation/database-performance.rst
index af5c204..fb66b14 100644
--- a/explanation/database-performance.rst
+++ b/explanation/database-performance.rst
@@ -88,7 +88,7 @@ can be used to do this in combination with a

 Be sure to clear these caches with a Storm invalidation hook, to avoid
 test suite fallout. Objects are not reused between requests on the
-appservers, so we're generally safe there. (Our storm and sqlbase
+appservers, so we're generally safe there. (Our storm and SQLBase
 classes within the Launchpad tree have these hooks, so you only need to
 manually invalidate if you are using storm directly).

@@ -137,7 +137,7 @@ one of these tools.

 - StormStatementRecorder, LP_DEBUG_SQL=1, LP_DEBUG_SQL_EXTRA=1,
-  QueryCollector. In extremis you can also turn on statement logging in
+  QueryCollector. In extreme cases you can also turn on statement logging in
   postgresql. [Note: please add more detail if you are reading this and
   have the time and knowledge.]
 - Raise an exception at a convenient point, to cause a real OOPS.

@@ -146,7 +146,7 @@ Efficient batching of SQL result sets: StormRangeFactory
 --------------------------------------------------------

 Batched result sets are rendered via the class
-canonical.launchpad.webapp.bachting.BatchNavigator. (This class is a
+canonical.launchpad.webapp.batching.BatchNavigator. (This class is a
 thin wrapper around lazr.batchnavigator.BatchNavigator.)

 BatchNavigator delegates the retrieval of batches from a result set to
diff --git a/explanation/engineering-overview-translations.rst b/explanation/engineering-overview-translations.rst
index 85c63fa..1bd7932 100644
--- a/explanation/engineering-overview-translations.rst
+++ b/explanation/engineering-overview-translations.rst
@@ -28,7 +28,7 @@ Launchpad: the *ubuntu side* and the *upstream side.*

 Where possible, the two sides are unified (in technical terms) and
 integrated (in collaboration terms). But you'll see a lot of cases where
 they are treated somewhat differently. Permissions can differ,
-organizational structures differ, and some processes only exist on one
+organisational structures differ, and some processes only exist on one
 side or the other.

 At the most fundamental level, the two sides are integrated through:
@@ -153,7 +153,7 @@ or select a different translation message.

 A translation message can be *current* in a given PO file, or not. It's
 an emergent property of more complex shared data structures. So you can
-view a PO file as a customizable “view” on the current translations of a
+view a PO file as a customisable “view” on the current translations of a
 particular template into a given language.

 ::
@@ -301,7 +301,7 @@ Soyuz uploads are different in that regard: all its custom logic is
 built into the gardener because the two developed hand in hand. Mainly
 for this reason, the gardener's approval logic is fiendishly complex.

-Permissions and organization
+Permissions and organisation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Message sharing
@@ -332,7 +332,7 @@ In a nutshell:

 - A \`POTranslation\` holds the text of a translated string;
   \`TranslationMessage\` refers to it (once for every plural form in the
   language).
-- A \`TranslationGroup\` is an organizational structure for managing
+- A \`TranslationGroup\` is an organisational structure for managing
   translations.
 - A \`Translator\` is an entry in a \`TranslationGroup\` saying who is
   responsible for a particular \`Language`.
@@ -352,7 +352,7 @@ multiple templates.
We then call these templates *sharing templates.* And that means that a translation message to, say, Italian will be available in each of those templates' PO file for Italian. -This is where it gets complicated; please fasten your seatbelts and +This is where it gets complicated; please fasten your seat belts and extinguish smoking motherboards. A translation message can be in one of three sharing states: @@ -394,7 +394,7 @@ current translation message? Look for one with: - diverged to your template or, if no message matches, not diverged at all. -(On a sidenote, this is why “simple” translation statistics can be quite +(On a side note, this is why “simple” translation statistics can be quite hard to compute.) Which templates share? @@ -518,7 +518,7 @@ translation page. Its complexity also makes the SQL logs hard to follow. A large part of this query (in terms of SQL text) was involved in finding out what templates were eligible for taking suggestions from. This part was also completely repetitive, and it doesn't even need to be -immediately consistent, so we materialized it as a simple cache table +immediately consistent, so we materialised it as a simple cache table called \`SuggestivePOTemplate`. We refresh this cache all the time by clearing out the table and diff --git a/explanation/feature-flags.rst b/explanation/feature-flags.rst index 8eae069..a0ad8cb 100644 --- a/explanation/feature-flags.rst +++ b/explanation/feature-flags.rst @@ -3,7 +3,7 @@ Feature Flags .. include:: ../includes/important_not_revised.rst -**FeatureFlags allow Launchpad's configuration to be changed while it's +**Feature Flags allow Launchpad's configuration to be changed while it's running, and for particular features or behaviours to be exposed to only a subset of users or requests.** @@ -23,15 +23,15 @@ Scenarios - Dark launches (aka embargoes: land code first, turn it on later) - Closed betas -- Scram switches (eg "omg daily builds are killing us, make it stop") +- Scram switches (e.g. "omg daily builds are killing us, make it stop") - Soft/slow launch (let just a few users use it and see what happens) - Site-wide notification - Show an 'alpha', 'beta' or 'new!' badge next to a UI control, then later turn it off without a new rollout -- Show developer-oriented UI only to developers (eg the query count) +- Show developer-oriented UI only to developers (e.g. the query count) - Control page timeouts (or other resource limits) either per page id, or per user group -- Set resource limits (eg address space cap) for jobs. +- Set resource limits (e.g. address space cap) for jobs. Concepts -------- @@ -40,7 +40,7 @@ A **feature flag** has a string name, and has a dynamically-determined value within a particular context such as a web or api request. The value in that context depends on determining which **scopes** are relevant to the context, and what **rules** exist for that flag and -scopes. The rules are totally ordered and the highest-prority rule +scopes. The rules are totally ordered and the highest-priority rule determines the relevant value. Flags values are strings; or if no value is specified, \`None`. (If an @@ -128,7 +128,7 @@ Flags should be named as where each of the parts is a legal Python name (so use underscores to join words, not dashes.) -The **area** is the general area of Launchpad this relates to: eg +The **area** is the general area of Launchpad this relates to: e.g. 'code', 'librarian', ... 
The **feature** is the particular thing that's being controlled, such as
@@ -218,7 +218,7 @@ Adding and documenting a new feature flag
 -----------------------------------------

 If you introduce a new feature flag, as well as reading it from
-whereever is useful, you should also:
+wherever is useful, you should also:

 - Add a section in lib/lp/services/features/flags.py flag_info
   describing the flag, including documentation that will make sense to
@@ -236,7 +236,7 @@ whereever is useful, you should also:
      ''),

 The last item in that list is descriptive, not prescriptive: it
-*documents the code's default behavior* if no value is specified. The
+*documents the code's default behaviour* if no value is specified. The
 flag's value will still read as None if no value is specified, and
 setting it to an empty value still returns the empty string.

@@ -267,7 +267,7 @@ and/or

 SCRIPT_SCOPE_HANDLERS

-depending on whether it applies to webapp requests, scripts, or both).
+depending on whether it applies to web app requests, scripts, or both).

 Testing
 -------
diff --git a/explanation/javascript-buildsystem.rst b/explanation/javascript-buildsystem.rst
index c01fd39..8d7b025 100644
--- a/explanation/javascript-buildsystem.rst
+++ b/explanation/javascript-buildsystem.rst
@@ -34,8 +34,7 @@ Adding a third-party widget
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 The current story for adding a third-party widget is to put it in
-``lib/lp/contrib``. You can read the mailing list discussion (
-https://lists.launchpad.net/launchpad-dev/msg06474.html ) about the adoption of
+``lib/lp/contrib``. You can read the `mailing list discussion`_ about the adoption of
 this location.

 For CSS, follow the rules above to modify the tools. If other assets are
@@ -44,6 +43,8 @@ needed, you'll need to create a link in ``lib/lp/contrib`` so the assets
 can be found. See ``lib/canonical/launchpad/icing/yui3-gallery`` for an
 example.

+.. _`mailing list discussion`: https://lists.launchpad.net/launchpad-dev/msg06474.html
+

 New Combo loader Setup
 ----------------------
@@ -54,7 +55,7 @@
 minified into a build directory ``build/js/``.

 Files are served out of the ``build/js`` directory based on the YUI
 combo loader config that is constructed in the
 ``lib/lp/app/templates/base-layout-macros.pt``. These are combined and
-served out via the convoy wsgi application through Apache.
+served out via the convoy WSGI application through Apache.

 Developing Javascript
 ~~~~~~~~~~~~~~~~~~~~~
@@ -85,7 +86,7 @@ include that module name in any YUI block.

 LPJS.use('modulename', function (Y)...

 The combo loader will serve your new module when you reload the page
-ith that content on it.
+with that content on it.

 Launchpad CSS
 -------------
diff --git a/explanation/javascript-integration-testing.rst b/explanation/javascript-integration-testing.rst
index fe65f67..df2b488 100644
--- a/explanation/javascript-integration-testing.rst
+++ b/explanation/javascript-integration-testing.rst
@@ -1,18 +1,18 @@
 Integration Testing in JavaScript
 =================================

-Launchpad's JavaScript testing is built around YUI 3's yuitest library.
-We use the GradedBrowserSupport chart to determine which browsers code should
+Launchpad's JavaScript testing is built around YUI 3's ``yuitest`` library.
+We use the Graded Browser Support chart to determine which browsers the code should
 be regularly tested in.

 Every JavaScript component should be tested first and foremost using
 :doc:`unit testing `.
-We have infrastructure to write tests centered on the integration +We have infrastructure to write tests centred on the integration between the JavaScript component and the app server (regular API or view/++model++ page api.) -These are still written using the \`yuitest\` library, but they are +These are still written using the ``yuitest`` library, but they are loaded and can access a "real" appserver (the one started by the AppServerLayer). @@ -32,7 +32,7 @@ Creating the tests - The ``.js`` file contains the tests using the standard ``yuitest`` library. - The ``.py`` file contains fixtures that will operate within the app server. They should create content through the standard LaunchpadObjectFactory that - will be accessed by the test through. The database is automatically reset + will be accessed by the test. The database is automatically reset after every test. Running the tests diff --git a/explanation/launchpad-ppa.rst b/explanation/launchpad-ppa.rst index f51605b..a077fc0 100644 --- a/explanation/launchpad-ppa.rst +++ b/explanation/launchpad-ppa.rst @@ -48,7 +48,7 @@ Policy/procedure for updates: version number, so if debchange -i adds that for you, take it out again and increment the unsuffixed version number instead. 5. debcommit or bzr commit -6. Exercise personal judgment on whether your change merits a merge +6. Exercise personal judgement on whether your change merits a merge proposal, or is sufficiently trivial to just be committed directly. 7. If preparing a merge proposal, please ensure your branch for review contains a complete debian/changelog entry ready for release. @@ -94,13 +94,13 @@ mmm-archive-manager Backported or patched Ubuntu packages ------------------------------------- -postgresql-10, postgresql-common, postgresql-debversion, slony1-2 (trusty, xenial) +postgresql-10, postgresql-common, postgresql-debversion, slony1-2 (Trusty, Xenial) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Straight backports of PostgreSQL 10 and paraphernalia from bionic to -trusty. bionic's version is fine. +Straight backports of PostgreSQL 10 and paraphernalia from Bionic to +Trusty. Bionic's version is fine. -pgbouncer (trusty) +pgbouncer (Trusty) ~~~~~~~~~~~~~~~~~~ Trusty's pgbouncer with wgrant's ENABLE/DISABLE patch as required by @@ -108,12 +108,12 @@ full-update.py. For the benefit of launchpad-dependencies, the patched package additionally Provides pgbouncer-with-disconnect. The ENABLE/DISABLE patch is included upstream in pgbouncer 1.6, so -xenial's version is fine. +Xenial's version is fine. -libgit2, git (bionic) +libgit2, git (Bionic) ~~~~~~~~~~~~~~~~~~~~~ -Various updates backported from focal for use on git.launchpad.net. +Various updates backported from Focal for use on git.launchpad.net. convoy ~~~~~~ @@ -125,7 +125,7 @@ The packaging is based on but modified to install Launchpad's convoy.wsgi. Ubuntu's modern packaging uses dh-python and supports Python 3. -debian-archive-keyring (xenial, bionic, focal) +debian-archive-keyring (Xenial, Bionic, Focal) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Straight backport for new Debian archive keys for gina's mirror. @@ -136,7 +136,7 @@ git-build-recipe `Daily builds of lp:git-build-recipe `__ -for buildds. +for builds. 
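To make the update procedure described earlier in this page concrete, a typical iteration looks something like this (a sketch; the version number and changelog message are hypothetical):

.. code-block:: sh

    # Make sure the changelog entry is attributed to you
    $ export DEBFULLNAME="Your Name"
    $ export DEBEMAIL="you@example.com"

    # Set the new unsuffixed version explicitly rather than letting
    # debchange -i append a suffix for you
    $ debchange -v 1.6-2 "Backport upstream ENABLE/DISABLE fix"

    # Commit, reusing the changelog entry as the commit message
    $ debcommit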
Distro series support
---------------------
@@ -144,14 +144,14 @@ Distro series support
 Stable
 ~~~~~~

-- trusty (obsolete production LTS, still used by databases)
-- xenial (current production LTS)
-- bionic (next production LTS)
+- Trusty (obsolete production LTS, still used by databases)
+- Xenial (current production LTS)
+- Bionic (next production LTS)

 In progress
 ~~~~~~~~~~~

-- focal (next, next production LTS?)
+- Focal (next, next production LTS?)

 When the supported series change, remember to also update
 :doc:`../how-to/getting` and :doc:`../how-to/running`.
diff --git a/explanation/mail.rst b/explanation/mail.rst
index 44acbe3..ece5761 100644
--- a/explanation/mail.rst
+++ b/explanation/mail.rst
@@ -6,14 +6,14 @@ Launchpad Mail
 There are various kinds of emails in Launchpad:

 1. Mailing lists (represented by Launchpad teams). A mailing list has an
-   address \`TEAM_NAME@lists.launchpad.net`, archives
-   (`https://lists.launchpad.net/TEAM_NAME`), and an administrative
-   interface (`https://lists.canonical.com/mailman/admin/TEAM_NAME`).
+   address ``TEAM_NAME@lists.launchpad.net``, archives
+   (``https://lists.launchpad.net/TEAM_NAME``), and an administrative
+   interface (``https://lists.canonical.com/mailman/admin/TEAM_NAME``).
    Launchpad uses `Mailman `__ to process these kinds of mails.
 2. Emails sent from one user to another (that is, an email sent "by"
    Launchpad, but really sent by user Alice when Alice uses the
-   \`https://edge.launchpad.net/~barry/+contactuser\` form to contact
+   ``https://edge.launchpad.net/~barry/+contactuser`` form to contact
    user Barry.
 3. Emails sent by Launchpad itself, such as emails sent to subscribers
    when a bug is changed.
diff --git a/explanation/navigation-menus.rst b/explanation/navigation-menus.rst
index 1bd2948..775fbc4 100644
--- a/explanation/navigation-menus.rst
+++ b/explanation/navigation-menus.rst
@@ -4,9 +4,9 @@ Navigation menus
 .. include:: ../includes/important_not_revised.rst

 When linking different views in Launchpad page templates it is recommend
-to use the !NavigationMenu attached to each facet of that object.
+to use the NavigationMenu attached to each facet of that object.

-The !NavigationMenus are defined in the *browser* code.
+The NavigationMenus are defined in the *browser* code.

 An object can have multiple *facets*. For example IPerson has a 'code',
 'overview', 'translation' .. etc facets.
@@ -37,7 +37,7 @@ An
 will only return the text without the anchor tag.

-From withing a page template, you can use the following TAL expresion to
+From within a page template, you can use the following TAL expression to
 generate a link:

 ::
diff --git a/explanation/security-policy.rst b/explanation/security-policy.rst
index a23472a..1906f92 100644
--- a/explanation/security-policy.rst
+++ b/explanation/security-policy.rst
@@ -7,7 +7,7 @@ Launchpad uses "permission" to control access to views, object
 attributes and object methods.

 Permission are granted based on the context object type (its interface)
-by an ``IAuthorization`` adapters. Traditionally these adapters have
+by ``IAuthorization`` adaptors. Traditionally these adaptors have
 all been defined in the ``canonical.launchpad.security`` module, but
 they are being moved out in the ``security.py`` module of the specific
 application.
@@ -76,7 +76,7 @@ permission is assigned to a given attribute, attempting to access it is
 **forbidden**. If there is a permission assigned to it, and the current
 user does not have that permission, attempting it is **unauthorized**.
If the current user has the correct permission, then that attribute will -behave almost exactly the same as it would on an un-proxied object. The +behave almost exactly the same as it would on an unproxied object. The main difference is that any return values may be wrapped in a SecurityProxy as well. diff --git a/explanation/storm-migration-guide.rst b/explanation/storm-migration-guide.rst index cb2e5f0..e61863c 100644 --- a/explanation/storm-migration-guide.rst +++ b/explanation/storm-migration-guide.rst @@ -6,7 +6,7 @@ Storm Migration Guide This guide explains how certain SQLObject concepts map to equivalent Storm concepts. It expects a level of familiarity in how SQLObject works (or at least how it is used in Launchpad). It is not a full tutorial on -how to use Storm either – see https://storm.canonical.com/Tutorial for +how to use Storm either - see https://storm.canonical.com/Tutorial for that. Differences @@ -30,8 +30,8 @@ can be used to refer to objects in multiple databases (or to objects in the same database over different DB connections, as you might want to do in tests). -There are two main ways to access the main store. One is explicitely via -the \`IStoreSelector\` utility: +There are two main ways to access the main store. One is explicitly via +the ``IStoreSelector`` utility: :: @@ -48,7 +48,7 @@ Use the master flavor if you need to update the objects. Use the slave flavor to offload a search to a replica database and don't mind the search being made on data a few seconds out of date. Use the default flavor if you don't need to make changes, but need an up to date copy of -the database (eg. most views, as the object you are viewing might just +the database (e.g. most views, as the object you are viewing might just have been created) - Launchpad will choose an appropriate flavor. The other method is from an existing object: @@ -62,7 +62,7 @@ The other method is from an existing object: The second form is often more convenient, and is preferred if you don't need to make updates and want them to play nicely with objects from an -unknown store (eg. passed in via your method parameters). +unknown store (e.g. passed in via your method parameters). Utility methods and Stores ~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -77,9 +77,7 @@ uses the master store: - If you are doing a POST (which means your overall operation may write) - If you are doing a GET, but have recently written (which means the - slaves may - -``not have your latest changes).`` + slaves may not have your latest changes). So the only times you'll run into trouble are if: @@ -87,9 +85,7 @@ So the only times you'll run into trouble are if: - a GET operation relies on data that was written to the database by another GET - a GET operation relies on data that was written to the database by - another browser - -``instance.`` + another browser instance. We plan to address these issues better once we're using Python 2.5 and its support for **with** statements / context management. @@ -152,9 +148,9 @@ takes the class and the primary key of the object as arguments: Querying Objects ~~~~~~~~~~~~~~~~ -The equivalent of SQLObject's \`select`, \`selectBy`, \`selectOne`, -\`selectOneBy`, \`selectFirst\` and \`selectFirstBy\` methods is -\`Store.find()`. It acts quite similar to the equivalent SQLObject +The equivalent of SQLObject's ``select``, ``selectBy``, ``selectOne``, +``selectOneBy``, ``selectFirst`` and ``selectFirstBy`` methods is +``Store.find()``. 
It acts quite similar to the equivalent SQLObject methods, and the following are equivalent: :: @@ -167,12 +163,12 @@ methods, and the following are equivalent: Note that the "`.q.`" bit is not required in the second example. The first two versions are preferred to direct SQL since they allow Storm to determine which tables are being used in the query automatically. As -with SQLObject, no query is issued when executing \`find()`: that is +with SQLObject, no query is issued when executing ``find()``: that is delayed until you try to access the result set. -The behaviour of \`selectOne\` and \`selectFirst\` are covered by the -\`one\` and \`first\` methods on the result set. You can chain them with -the \`find\` call if it is appropriate: +The behaviour of ``selectOne`` and ``selectFirst`` are covered by the +``one`` and ``first`` methods on the result set. You can chain them with +the ``find`` call if it is appropriate: :: @@ -199,9 +195,9 @@ Unlike SQLObject, the ordering is applied to the result set rather than creating another one. The method does return the result set though, to make it possible to chain the calls when constructing a result set. Similar to SQLObject, a table can specify the default ordering for -results with the \`__storm_order__\` class attribute. +results with the ``__storm_order__`` class attribute. -See the \`storm.store.ResultSet\` doc strings and the Storm tutorial for +See the ``storm.store.ResultSet`` doc strings and the Storm tutorial for more details on what is possible. Defining Tables @@ -210,12 +206,12 @@ Defining Tables Some of the primary differences between SQLObject and Storm database class definitions are: -- Subclass from \`lp.services.database.stormbase.StormBase\` instead of - \`lp.services.database.sqlbase.SQLBase`. (Subclassing - \`storm.base.Storm\` also works in most cases, but \`StormBase\` adds - a \`storm_invalidate\` hook for cached properties.) -- Use the \`__storm_table__\` attribute to set the table name instead - of \`_table`. +- Subclass from ``lp.services.database.stormbase.StormBase`` instead of + ``lp.services.database.sqlbase.SQLBase``. (Subclassing + ``storm.base.Storm`` also works in most cases, but ``StormBase`` adds + a ``storm_invalidate`` hook for cached properties.) +- Use the ``__storm_table__`` attribute to set the table name instead + of ``_table``. - The primary key must be defined explicitly. This will usually look like: @@ -229,35 +225,35 @@ class definitions are: id = Int(primary=True) - The class should have a constructor if appropriate (some classes like - \`BugSubscription\` may not need one). Note that the constructor + ``BugSubscription`` may not need one). Note that the constructor should not usually add the object to a store -- leave that for a - \`FooSet.new()\` method, or let it be inferred by a relation. - **BarryWarsaw: what if there is no \`FooSet\` or relation? See + ``FooSet.new()`` method, or let it be inferred by a relation. + **Barry Warsaw: what if there is no ``FooSet`` or relation? See question below.** - Default result set ordering should be set using the - \`__storm_order__\` property rather than \`_defaultOrder`. + ``__storm_order__`` property rather than ``_defaultOrder``. - Use the column definition classes are found in \`storm.properties`, - and do not use the \`Col\` suffix. In general, they will follow + and do not use the ``Col`` suffix. In general, they will follow Python's type naming conventions rather than SQL's (e.g. TimeDelta rather than Interval). 
-- There is no equivalent of \`alternateID=True`. The \`Store.find()\` - method provides equivalent functionality to the \`byColumnName\` +- There is no equivalent of ``alternateID=True``. The ``Store.find()`` + method provides equivalent functionality to the ``byColumnName`` methods generated by this argument. - To specify that a column can not contain NULLs, use - \`allow_none=False\` rather than \`notNull=True`. Note that if NULLs - are found in such columns, \`NoneError\` will be raised. -- If no \`default\` is specified for a column, the database default - will be used. So \`default=DEFAULT\` or similar can be removed. -- Be sure your table has a \`PRIMARY KEY\` constraint defined, - otherwise your \`id\` column will not get set automatically and you - will get an \`IntegrityError\` from PostgreSQL. + ``allow_none=False`` rather than ``notNull=True``. Note that if NULLs + are found in such columns, ``NoneError`` will be raised. +- If no ``default`` is specified for a column, the database default + will be used. So ``default=DEFAULT`` or similar can be removed. +- Be sure your table has a ``PRIMARY KEY`` constraint defined, + otherwise your ``id`` column will not get set automatically and you + will get an ``IntegrityError`` from PostgreSQL. Foreign Key References ^^^^^^^^^^^^^^^^^^^^^^ -The equivalent of SQLObject's \`ForeignKey\` class is \`Reference`. A -Storm \`Reference\` property creates a relationship between a local -column and a remote column. Unlike \`ForeignKey`, it does not implicitly +The equivalent of SQLObject's ``ForeignKey`` class is ``Reference``. A +Storm ``Reference`` property creates a relationship between a local +column and a remote column. Unlike ``ForeignKey``, it does not implicitly create the FK column. So the following definitions are equivalent: :: @@ -273,8 +269,8 @@ create the FK column. So the following definitions are equivalent: The columns can be passed directly to Reference(), or can be passed as strings that are looked up on first use. -The \`Reference\` class is also used to replace SQLObject's -\`SingleJoin\` class: +The ``Reference`` class is also used to replace SQLObject's +``SingleJoin`` class: :: @@ -288,8 +284,8 @@ The \`Reference\` class is also used to replace SQLObject's Reference Sets ^^^^^^^^^^^^^^ -The \`SQLMultipleJoin\` and \`SQLRelatedJoin\` classes are replaced by -Storm's \`ReferenceSet`: +The ``SQLMultipleJoin`` and ``SQLRelatedJoin`` classes are replaced by +Storm's ``ReferenceSet``: :: @@ -307,16 +303,16 @@ Storm's \`ReferenceSet`: order_by=Person.name) While the SQLObject properties return plain result sets, the Storm -properties return \`BoundReferenceSet\` objects. Some differences +properties return ``BoundReferenceSet`` objects. Some differences include: -- \`add(obj)\` and \`remove(obj)\` methods are provided for adding and +- ``add(obj)`` and ``remove(obj)`` methods are provided for adding and removing objects from the set. These are roughly equivalent to the - automatic \`addFoo()\` and \`removeFoo()\` methods that SQLObject + automatic ``addFoo()`` and ``removeFoo()`` methods that SQLObject generates. For reference sets that join through a third table, Storm will take care of inserting and deleting rows as needed. -- A \`find()\` method is provided for searching for objects within the - reference set. This behaves a lot like \`Store.find()\` without the +- A ``find()`` method is provided for searching for objects within the + reference set. This behaves a lot like ``Store.find()`` without the first argument. 
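As a quick sketch of how a bound reference set is used in practice (the names here are illustrative, not real Launchpad code; assume ``Person.languages`` is a ``ReferenceSet`` as in the definitions above, and ``store`` is the store the objects live in):

::

    person = store.find(Person, Person.name == 'sample-person').one()

    # add()/remove() replace SQLObject's generated addLanguage()/removeLanguage();
    # for sets joined through a third table, Storm maintains the join rows.
    person.languages.add(english)
    person.languages.remove(french)

    # find() searches within the set, like Store.find() without the first argument.
    result = person.languages.find(Language.code == 'en')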
Property Setters / Validators @@ -324,20 +320,20 @@ Property Setters / Validators SQLObject provided two ways of controlling how variables were set: -1. magic \`_set_columnName()\` methods. +1. magic ``_set_columnName()`` methods. 2. the validator argument on column definitions. Storm does not support magic methods but does have validators (albeit in a simpler form than SQLObject). A validator is a function that takes -\`(object, attr_name, new_value)\` as arguments and returns the value +``(object, attr_name, new_value)`` as arguments and returns the value that should be set. This allows validation to be performed on the new value (by raising an exception on bad values), and transformation of the -value if appropriate (by returning something other than \`new_value`). +value if appropriate (by returning something other than ``new_value``). -A validator can be set for a column with the \`validator\` argument in +A validator can be set for a column with the ``validator`` argument in the column definition. -You may notice some uses of \`storm_validator\` in code using the +You may notice some uses of ``storm_validator`` in code using the compatibility layer. As the compatibility layer does not implement the either of the SQLObject validation APIs, this was done to allow use of Storm validators without completely rewriting the definitions. @@ -346,7 +342,7 @@ Prejoins ^^^^^^^^ Storm's equivalent of prejoins is tuple finds. To select all products -that are part of \`launchpad-project\` and their owners, we can do: +that are part of ``launchpad-project`` and their owners, we can do: :: @@ -389,14 +385,14 @@ This result set will return (product, owner, driver) tuples. Direct SQL Queries ~~~~~~~~~~~~~~~~~~ -To perform direct SQL queries, we previously used the \`cursor()\` -function from \`lp.services.database.sqlbase\` to get a cursor on the +To perform direct SQL queries, we previously used the ``cursor()`` +function from ``lp.services.database.sqlbase`` to get a cursor on the connection being used by SQLObject. These uses should be converted to -use \`Store.execute()`, which will make sure pending changes have been +use ``Store.execute()``, which will make sure pending changes have been flushed to the database first in order to stay consistent. -This method returns a result object with \`get_one\` and \`get_all\` -methods that act like a cursor's \`fetchone\` and \`fetchall\` methods. +This method returns a result object with ``get_one`` and ``get_all`` +methods that act like a cursor's ``fetchone`` and ``fetchall`` methods. It also supports iteration. :: @@ -412,18 +408,18 @@ A good order to migrate code is: 1. Convert column properties to use the Storm syntax. This should be a no-op change, and not affect external code. -2. Convert \`ForeignKey()\` definitions to an appropriate pair of - \`Int()\` and \`Reference()\` definitions. -3. Convert \`sync()`, \`syncUpdate()`, \`destroySelf()`, etc calls to +2. Convert ``ForeignKey()`` definitions to an appropriate pair of + ``Int()`` and ``Reference()`` definitions. +3. Convert ``sync()``, ``syncUpdate()``, ``destroySelf()``, etc calls to Storm equivalents. -4. Convert uses of \`Class.select*()\` to use \`find()`. Note that you +4. Convert uses of ``Class.select*()`` to use ``find()``. Note that you lose prejoins support here, so use tuple finds as appropriate. Change queries to use Storm expressions rather than sqlbuilder expressions. -5. Convert \`SQLMultipleJoin\` and \`SQLRelatedJoin\` to - \`ReferenceSet()`. 
As this changes the API of the class a bit, it +5. Convert ``SQLMultipleJoin`` and ``SQLRelatedJoin`` to + ``ReferenceSet()``. As this changes the API of the class a bit, it will probably require changes external to the class. 6. Change the class to derive from - \`lp.services.database.stormbase.StormBase\` instead of \`SQLBase`. + ``lp.services.database.stormbase.StormBase`` instead of ``SQLBase``. This list is roughly ordered based on the locality of changes and based on dependencies between changes. @@ -532,45 +528,41 @@ Questions 12-Aug-2008 -- Some of our ForeignKey columns had notNull=True but Storm's Reference - class - -``does not accept allow_none=False keyword argument.`` +- Some of our ForeignKey columns had ``notNull=True`` but Storm's Reference + class does not accept ``allow_none=False`` keyword argument. -- - - - Put the \`allow_none=False\` on the \`Int\` rather than on the - \`Reference`. + - Put the ``allow_none=False`` on the ``Int`` rather than on the + ``Reference``. .. raw:: html - How to actually convert a UtcDateTimeCol to a DateTime? For now, I'm - using + using a DateTime with ``tzinfo=pytz.timezone('UTC')`` keyword argument. + Also, does ``default=UTC_NOW`` still work? -| ``a DateTime with tzinfo=pytz.timezone('UTC') keyword arg.  Also, does`` -| ``default=UTC_NOW still work?`` + - Use ``default_factory=datetime.utcnow`` instead. -:literal:`bigjools: use `default_factory=datetime.utcnow` instead.` +.. raw:: html -- Can I still use EnumCol, or is there a better way to hook up with our + -``DBEnums?`` +- Can I still use EnumCol, or is there a better way to hook up with our DBEnums? -- + - Try ``lp.services.database.enumcol.DBEnum``. - - Try lp.services.database.enumcol.DBEnum. +.. raw:: html + + 03-Oct-2008 -- I'm still confused about the right way to add an object to a store. - If I'm - -| ``using native Storm APIs (as all new code should, right?) should I add a`` -| :literal:`Store.add() call my database object's `__init__()`?  That seems to be the` -| ``most straightforward translation of the SQLObject compatibility layer.  And`` -| ``if the answer is "yes", then how do I get the Store to use?  I could use`` -| :literal:`\`Store.of(someobj).add(self)` but `someobj` might not be in the right store.` -| :literal:`I could use the `getUtility()` trick, but it seems wrong that a database` -| :literal:`module should be importing an interface from `webapp`.` +- I'm still confused about the right way to add an object to a store. + If I'm using native Storm APIs (as all new code should, right?) should + I add a ``Store.add()`` call my database object's ``__init__()``? + That seems to be the most straightforward translation of the SQLObject compatibility layer. + And if the answer is "yes", then how do I get the Store to use? + I could use ``Store.of(someobj).add(self)`` but ``someobj`` might not be in the right store. + I could use the ``getUtility()`` trick, but it seems wrong that a database + module should be importing an interface from ``webapp``. diff --git a/explanation/url-traversal.rst b/explanation/url-traversal.rst index 18c6f34..f900c0a 100644 --- a/explanation/url-traversal.rst +++ b/explanation/url-traversal.rst @@ -42,7 +42,7 @@ This specifies: 2. attribute_to_parent defines the attribute of this interface that refers to the parent interface. Remember, we are starting from a leaf, and working back to the root URL. -3. We are adding comments/${id } to the path of the parent Interface. +3. We are adding comments/${id} to the path of the parent Interface. 
Where id is the id field of the instance.
4. rootsite is the subdomain this URL should be rooted at

@@ -84,8 +84,8 @@ Next, you need to implement the factory:
 The function decorators helps reduce the ZCML needed for registration,
 they specify:

-1. The interface that the adapter will provide: \`ICanonicalUrlData`.
-2. The objects that the adapter works with: \`IBranchTarget`.
+1. The interface that the adapter will provide: ``ICanonicalUrlData``.
+2. The objects that the adapter works with: ``IBranchTarget``.

 Note that this is using the context of the view to get the
 ICanonicalUrlData. If it were only using the view, you'd get infinite
diff --git a/how-to/develop-with-buildd.rst b/how-to/develop-with-buildd.rst
new file mode 100644
index 0000000..7a660e4
--- /dev/null
+++ b/how-to/develop-with-buildd.rst
@@ -0,0 +1,379 @@
+How to develop with Buildd
+==========================

LXD VM Support
--------------

This is now available in stable LXD and allows VMs to be managed with the same LXD CLI.

For now, we need to use the ``images:`` source for images, rather than the ``ubuntu:`` images. The default ubuntu images do not have the LXD agent preinstalled. Once they do, this gets a bit simpler.

It is also slightly simpler to use the ``ubuntu`` user, as it is already available in the image and doesn't require jumping through as many hoops to get ``uid``/``gid`` mapping to work.

Create a LXD profile for VMs
----------------------------

This is a convenience helper profile for VMs that will add users and run ``cloud-init`` for installing the LXD VM agent. It is not required and you can instead pass the options on the ``lxc`` command.

The password for the user can be generated using:

.. code-block:: sh

    $ mkpasswd -m sha-512

``mkpasswd`` lives in the ``whois`` package.

For now, we are using the LXD-provided cloud images, as they have the LXD agent and ``cloud-init`` preinstalled. This requires a smaller LXD profile, but needs some extra commands afterwards.

To create this, run:

.. code-block:: sh

    $ lxc profile create vm

and then:

.. code-block:: sh

    $ lxc profile edit vm

.. code-block:: yaml

    name: vm
    config:
      limits.cpu: "2"
      limits.memory: 4GB
      user.vendor-data: |
        #cloud-config
        package_update: true
        ssh_pwauth: yes
        packages:
          - openssh-server
          - byobu
          - language-pack-en
        users:
          - name: "ubuntu"
            passwd: ""
            lock_passwd: false
            groups: lxd
            shell: /bin/bash
            sudo: ALL=(ALL) NOPASSWD:ALL
            ssh-import-id: 
    description: ""
    devices:
      config:
        source: cloud-init:config
        type: disk
      eth0:
        name: eth0
        nictype: bridged
        parent: lxdbr0
        type: nic
      work:
        path: 
        source: 
        type: disk


Start the LXD VM
----------------

Start a VM by downloading the ``images:`` cloud image:

``lxc launch images:ubuntu//cloud -p vm -p default --vm``

This will take a while to settle. You can monitor its progress with ``lxc console ``.

Once it has completed cloud-init, you should then see an IP assigned in ``lxc list`` and be able to execute a bash shell with ``lxc exec bash``.

Configure password and ssh
--------------------------

This should be done by the cloud-init config in the profile, but the package is not installed at the time that config runs, so do it manually afterwards:

.. code-block:: sh

    $ lxc exec sudo passwd ubuntu
    $ lxc exec --user 1000 "/usr/bin/ssh-import-id" 


This will not be required once we can use the ``ubuntu:`` image source in LXD.
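To check that this worked (a sketch; ``<name>`` stands for the instance name used above), confirm that the VM reports an address and that you can log in:

.. code-block:: sh

    $ lxc list <name>

    # a shell via the LXD agent...
    $ lxc exec <name> bash

    # ...and a login over ssh with the key you just imported
    $ ssh ubuntu@<the VM's IP address>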
+ +Launchpad Buildd +---------------- + +We'll need a clone of this and then build and install it for running. + +Branch +------ + +.. code-block:: sh + + $ sudo apt install git + $ git clone https://git.launchpad.net/launchpad-buildd + +Install dependencies +-------------------- + +.. code-block:: sh + + $ cd launchpad-buildd + $ sudo apt-add-repository ppa:launchpad/ubuntu/buildd-staging + $ sudo apt-add-repository ppa:launchpad/ubuntu/ppa + $ vi /etc/apt/sources.list.d/launchpad-ubuntu-ppa-bionic.list + $ sudo apt update + $ sudo apt build-dep launchpad-buildd fakeroot + $ sudo apt install -f + +Note: if ``fakeroot`` can't be found try: + +.. code-block:: sh + + $ sudo sed -Ei 's/^# deb-src /deb-src /' /etc/apt/sources.list + $ sudo apt-get update + $ sudo apt build-dep launchpad-buildd fakeroot + $ sudo apt install -f + +Make and install the package +---------------------------- + +.. code-block:: sh + + $ cd launchpad-buildd + $ make + $ cd .. + $ sudo dpkg -i ./python3-lpbuildd__all.deb ./launchpad-buildd__all.deb + +Run the buildd +-------------- + +Edit ``/etc/launchpad-buildd/default`` and change ``ntphost`` to something valid (``ntp.ubuntu.com`` should work) + +.. code-block:: sh + + $ sudo mkdir -p /var/run/launchpad-buildd + $ sudo chown ubuntu: /var/run/launchpad-buildd + $ cd launchpad-buildd + $ /usr/bin/python3 /usr/bin/twistd --no_save --pidfile /var/run/launchpad-buildd/default.pid --python /usr/lib/launchpad-buildd/buildd-slave.tac -n + +Making changes +-------------- + +The package is installed as a system deb, so to make changes you will need to rebuild and reinstall the package following the 'Make and install' section. + +Testing +------- + +You probably want the next section (:ref:`Configuring Launchpad `) at this point, but if you are doing any buildd development and need to test your changes without having to have the whole system running, you can use the XML-RPC interface to cause builds to happen. + +Getting a base image +-------------------- + +First, we need a base image to use for the builds. Usually, this is pulled as part of a build, but if we don't have Launchpad involved, we need to set this up manually. +To download the image we need to set up the ``ubuntu-archive-tools``. + +.. code-block:: sh + + $ git clone https://git.launchpad.net/ubuntu-archive-tools + $ sudo apt install python3-launchpadlib python3-ubuntutools + +Now we can download the image. Please note that there are two types of images: ``chroot`` and ``lxd`` images. +``chroot`` backend is only used for ``binarypackagebuilds`` and ``sourcepackagerecipebuilds`` while all the other build types use an ``lxd`` image. + +To download the ``lxd`` image proceed as follows: + +.. code-block:: sh + + $ ./manage-chroot -s bionic -a amd64 -i lxd get + $ sha1sum livecd.ubuntu-base.lxd.tar.gz + $ mv livecd.ubuntu-base.lxd.tar.gz + +To download a ``chroot`` image proceed as follows: + +.. code-block:: sh + + $ ./manage-chroot -s bionic -a amd64 get + $ sha1sum livecd.ubuntu-base.rootfs.tar.gz + $ mv livecd.ubuntu-base.rootfs.tar.gz + +Now we should copy the downloaded image to the builder file cache to be picked up during the build phase if you are running your builder locally. + +.. code-block:: sh + + $ sudo cp /home/buildd/filecache-default + $ sudo chown buildd: /home/buildd/filecache-default/ + +Running a build locally +----------------------- + +You can try running a build via the XML-RPC interface. Start a Python/IPython repl and run. + +.. 
code-block:: python + + import xmlrpclib + proxy = xmlrpclib.ServerProxy("http://localhost:8221/rpc") + proxy.status() + +Assuming that works, a sample build can be created using (relying on the OCI capabilities being merged into launchpad-buildd): +Note that if we are using the ``lxd`` backend we should specify that in our build ``args`` by adding ``"image_type": "lxd"``. + +.. code-block:: python + + proxy.build('1-3', 'oci', '', {}, {'name': 'test-build', 'series': 'bionic', 'arch_tag': 'amd64', 'git_repository': 'https://github.com/tomwardill/test-docker-repo.git', 'archives': ['deb http://archive.ubuntu.com/ubuntu bionic main restricted', 'deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted', 'deb http://archive.ubuntu.com/ubuntu bionic universe']}) + +.. _configuring-launchpad: + +Configuring Launchpad +--------------------- + +Change ``https://launchpad.test/ubuntu/+pubconf`` as admin from ``archive.launchpad.test`` to ``archive.ubuntu.com``. + +In ``launchpad/launchpad/configs/development/launchpad-lazr.conf`` change: + +1: ``git_browse_root`` from ``https://git.launchpad.test/`` to ``http://git.launchpad.test:9419/`` + +2: ``git_ssh_root`` from ``git+ssh://git.launchpad.test/`` to ``git+ssh://git.launchpad.test:9422/`` + +3: ``builder_proxy_host`` from ``snap-proxy.launchpad.test`` to ``none`` + +4: ``builder_proxy_port`` from ``3128`` to ``none`` + +In ``launchpad/launchpad/lib/lp/services/config/schema-lazr.conf`` under the ``[oci]`` tag add a pair of private and public keys in order to be able to add OCI credentials, valid example below: + +1: ``registry_secrets_private_key``: ``U6mw5MTwo+7F+t86ogCw+GXjcoOJfK1f9G/khlqhXc4=`` + +2: ``registry_secrets_public_key``: ``ijkzQTuYOIbAV9F5gF0loKNG/bU9kCCsCulYeoONXDI=`` + + + +Running soyuz and adding data +----------------------------- + +First, you'll need to run some extra bits in Launchpad: + +.. code-block:: sh + + $ utilities/start-dev-soyuz.sh + $ utilities/soyuz-sampledata-setup.py + $ make run + +Image Setup +----------- + +Consult the 'Launchpad Configuration' section of :doc:`use-soyuz-locally` to do the correct ``manage-chroot`` dance to register an image with launchpad. Without this, you will have no valid buildable architectures. + +User setup +---------- + +It's convenient to add your user to the correct groups, so you can interact with it, without being logged in as admin. + + 1. Log in as admin + 2. Go to https://launchpad.test/~launchpad-buildd-admins and add your user + 3. Go to https://launchpad.test/~ubuntu-team and add your user + +Registering the buildd +---------------------- + +The buildd that you have just installed needs registering with Launchpad so that builds can be dispatched to it. + + 1. Go to https://launchpad.test/builders + + 2. Press 'Register a new build machine' + + 3. Fill in the details. + + - The 'URL' is probably ``http://:8221``. + + - You can make the builder be either virtualized or non-virtualized, but each option requires some extra work. Make sure you understand what's needed in the case you choose. + + - Most production builders are virtualized, which means that there's machinery to automatically reset them to a clean VM image at the end of each build. To set this up, ``builddmaster.vm_resume_command`` in your config must be set to a command which ``buildd-manager`` can run to reset the builder. If the VM reset protocol is 1.1, then the resume command is expected to be synchronous: once it returns, the builder should be running. 
+
+Running Soyuz and adding data
+-----------------------------
+
+First, you'll need to run some extra bits of Launchpad:
+
+.. code-block:: sh
+
+    $ utilities/start-dev-soyuz.sh
+    $ utilities/soyuz-sampledata-setup.py
+    $ make run
+
+Image setup
+-----------
+
+Consult the 'Launchpad Configuration' section of :doc:`use-soyuz-locally` to do the correct ``manage-chroot`` dance to register an image with Launchpad. Without this, you will have no valid buildable architectures.
+
+User setup
+----------
+
+It's convenient to add your user to the correct teams, so you can interact with the system without being logged in as admin.
+
+  1. Log in as admin
+  2. Go to https://launchpad.test/~launchpad-buildd-admins and add your user
+  3. Go to https://launchpad.test/~ubuntu-team and add your user
+
+Registering the buildd
+----------------------
+
+The buildd that you have just installed needs registering with Launchpad so that builds can be dispatched to it.
+
+  1. Go to https://launchpad.test/builders
+
+  2. Press 'Register a new build machine'
+
+  3. Fill in the details.
+
+     - The 'URL' is probably ``http://<hostname>:8221``.
+
+     - You can make the builder either virtualized or non-virtualized, but each option requires some extra work. Make sure you understand what's needed in the case you choose.
+
+     - Most production builders are virtualized, which means that there's machinery to automatically reset them to a clean VM image at the end of each build. To set this up, ``builddmaster.vm_resume_command`` in your config must be set to a command which ``buildd-manager`` can run to reset the builder. If the VM reset protocol is 1.1, then the resume command is expected to be synchronous: once it returns, the builder should be running. If the VM reset protocol is 2.0, then the resume command is expected to be asynchronous, and the builder management code is expected to change the builder's state from ``CLEANING`` to ``CLEAN`` using the webservice once the builder is running.
+
+     - Non-virtualized builders are much simpler: ``launchpad-buildd`` is cleaned synchronously over XML-RPC at the end of each build, and that's it. If you use this, then you must be careful not to run any untrusted code on the builder (since a ``chroot`` or container escape could compromise the builder), and you'll need to uncheck "Require virtualized builders" on any PPAs, live file systems, recipes, etc. that you want to be allowed to build on this builder.
+
+  4. After 30 seconds or so, the status of the builder on the builders page should be 'Idle'. This page does not auto-update, so refresh!
+
+Running a build on qastaging
+----------------------------
+
+We can also use XML-RPC to interact with qastaging/staging builders.
+First of all, SSH into the bastion and then into the ``launchpad-buildd-manager`` unit, since it's the one that has the firewall rules to talk to builders.
+
+Then follow the same procedure as before to get the correct ``sha1sum`` of a backend image:
+
+.. code-block:: sh
+
+    $ ./manage-chroot -s bionic -a amd64 -i lxd get
+    $ sha1sum livecd.ubuntu-base.lxd.tar.gz
+
+We call ``manage-chroot`` to download the LXD image from our database; this way we can compute its ``sha1sum`` and gather all the arguments we need to call the ``ensurepresent`` function. The function takes the ``sha1sum`` of the image we want, the URL from which we retrieved the image, and a username and password.
+At this point we should select the builder that we want to interact with. We can navigate to ``https://qastaging.launchpad.net/builders/qastaging-bos03-amd64-001`` and get the builder location; in this example it is ``http://qastaging-bos03-amd64-001.vbuilder.qastaging.bos03.scalingstack:8221``.
+
+.. code-block:: python
+
+    from xmlrpc.client import ServerProxy
+
+    proxy = ServerProxy("http://qastaging-bos03-amd64-001.vbuilder.qastaging.bos03.scalingstack:8221/rpc")
+    proxy.status()
+
+    # Inject the backend image we retrieved before.
+    proxy.ensurepresent("<sha1sum>", "<image URL>", "admin", "admin")
+
+    # Start the build.
+    proxy.build(
+        '1-3', 'snap', '<sha1sum>', {},
+        {'name': 'test-build',
+         'image_type': 'lxd',
+         'series': 'bionic',
+         'arch_tag': 'amd64',
+         'git_repository': 'https://github.com/tomwardill/test-docker-repo.git',
+         'archives': [
+             'deb http://archive.ubuntu.com/ubuntu bionic main restricted',
+             'deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted',
+             'deb http://archive.ubuntu.com/ubuntu bionic universe']})
+
+    # Clean the builder after a failure or a success.
+    proxy.clean()
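+
+When driving builds by hand like this, it helps to poll the builder until the build finishes rather than re-running ``proxy.status()`` manually. A minimal sketch, reusing the proxy object from above; the exact shape of the ``status()`` result has varied between launchpad-buildd versions, so inspect it first:
+
+.. code-block:: python
+
+    import time
+
+    # Poll until the builder leaves the BUILDING state.
+    while True:
+        status = proxy.status()
+        print(status)
+        if status.get("builder_status") != "BuilderStatus.BUILDING":
+            break
+        time.sleep(10)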
+
+Proxy setup
+-----------
+
+If our build needs to talk to the external world, we will need to set up the proxy for our builders; by ``proxy`` we mean here both the builder proxy and the fetch service.
+First of all, we need to pull the token to authenticate our ``launchpad-buildd-manager`` against the proxy.
+
+All these commands should be run on the ``launchpad-buildd-manager`` unit.
+
+.. code-block:: python
+
+    import base64
+
+    admin_username = "<builder_proxy_auth_api_admin_username>"
+    admin_secret = "<builder_proxy_auth_api_admin_secret>"
+    auth_string = f"{admin_username}:{admin_secret}".strip()
+    basic_token = base64.b64encode(auth_string.encode("ASCII")).decode("ASCII")
+
+We can retrieve the ``builder_proxy_auth_api_admin_username`` and ``builder_proxy_auth_api_admin_secret`` values from the ``launchpad-buildd-manager`` configuration.
+Once we have the basic token we can call the proxy, asking for a token:
+
+.. code-block:: sh
+
+    $ curl -X POST http://builder-proxy-auth.staging.lp.internal:8080/tokens -H "Authorization: Basic <basic_token>" -H "Content-Type: application/json" -d '{"username": "<app-name>"}'
+
+Now we have all the information we need to populate the ``args`` that the ``build`` function needs in order to use the proxy.
+These ``args`` are ``"proxy_url": "http://<app-name>:<token>@builder-proxy.staging.lp.internal:3128"`` and
+``"revocation_endpoint": "http://builder-proxy-auth.staging.lp.internal:8080/tokens/<app-name>"``, which we can assemble manually from
+the information we retrieved before: ``<app-name>`` is the name that we will pass to our build function (see the following code) and
+``<token>`` is the token we retrieved with the ``curl`` call in the previous step.
+
+The modified ``build`` call will look like:
+
+.. code-block:: python
+
+    # Start the build.
+    proxy.build(
+        'app-name',
+        'snap',
+        '<sha1sum>',
+        {},
+        {'name': 'app-name',
+         'image_type': 'lxd',
+         'series': 'bionic',
+         'arch_tag': 'amd64',
+         'git_repository': 'https://github.com/tomwardill/test-docker-repo.git',
+         'archives': [
+             'deb http://archive.ubuntu.com/ubuntu bionic main restricted',
+             'deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted',
+             'deb http://archive.ubuntu.com/ubuntu bionic universe'
+         ],
+         "proxy_url": "http://app-name:<token>@builder-proxy.staging.lp.internal:3128",
+         "revocation_endpoint": "http://builder-proxy-auth.staging.lp.internal:8080/tokens/app-name"
+        })
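+
+If you do this regularly, the token request and URL assembly can be scripted instead of using ``curl``. A sketch, assuming the ``requests`` library is available; the key under which the response returns the token is an assumption, so inspect the response payload first:
+
+.. code-block:: python
+
+    import base64
+
+    import requests
+
+    admin_username = "<builder_proxy_auth_api_admin_username>"
+    admin_secret = "<builder_proxy_auth_api_admin_secret>"
+    app_name = "app-name"
+
+    # Same Basic token as assembled above.
+    basic_token = base64.b64encode(
+        f"{admin_username}:{admin_secret}".encode("ASCII")).decode("ASCII")
+
+    response = requests.post(
+        "http://builder-proxy-auth.staging.lp.internal:8080/tokens",
+        headers={"Authorization": f"Basic {basic_token}"},
+        json={"username": app_name},
+    )
+    response.raise_for_status()
+    token = response.json().get("token")  # assumed key; check the actual payload
+
+    proxy_url = f"http://{app_name}:{token}@builder-proxy.staging.lp.internal:3128"
+    revocation_endpoint = (
+        f"http://builder-proxy-auth.staging.lp.internal:8080/tokens/{app_name}")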
diff --git a/how-to/getting-started.rst b/how-to/getting-started.rst
index f9eaa75..34e6a17 100644
--- a/how-to/getting-started.rst
+++ b/how-to/getting-started.rst
@@ -8,3 +8,5 @@ Getting started
    getting
    running
    database-setup
+   develop-with-buildd
+   use-soyuz-locally
diff --git a/how-to/operating-launchpad.rst b/how-to/operating-launchpad.rst
index ea217c9..c7dca08 100644
--- a/how-to/operating-launchpad.rst
+++ b/how-to/operating-launchpad.rst
@@ -13,4 +13,6 @@ Operating Launchpad
    deploying-configuration-changes
    land-update-for-loggerhead
    transfer-project-ownership
-   create-bot-account
\ No newline at end of file
+   create-bot-account
+   porting-builders-to-newer-ubuntu-versions
+
diff --git a/how-to/porting-builders-to-newer-ubuntu-versions.rst b/how-to/porting-builders-to-newer-ubuntu-versions.rst
new file mode 100644
index 0000000..4a23bf9
--- /dev/null
+++ b/how-to/porting-builders-to-newer-ubuntu-versions.rst
@@ -0,0 +1,52 @@
+Porting builders to newer Ubuntu versions
+=========================================
+
+QA Migration & Deployment
+-------------------------
+
+The following steps are involved in porting builders to newer Ubuntu versions.
+
+- Port `launchpad-buildd <https://git.launchpad.net/launchpad-buildd>`_ and its dependencies to work on the target Ubuntu version. You can follow the `lp-buildd docs `_ to develop and publish on the buildd-staging PPA.
+
+  - Apart from the deb dependencies defined in `debian/control <https://git.launchpad.net/launchpad-buildd/tree/debian/control>`_ in `launchpad-buildd <https://git.launchpad.net/launchpad-buildd>`_, you would also need to make sure that deb packages of the target Ubuntu version are available for ``bzr-builder``, ``git-recipe-builder`` and ``quilt``.
+  - These dependencies are defined in `charm-launchpad-buildd-image-modifier `__.
+
+- Update the ``gss_series`` variable in `launchpad-mojo-specs `__. Run ``mojo run`` to deploy the config changes.
+
+  - Note: we use the ``vbuilder`` branch for build farm mojo specs.
+  - You don't have to update the builder config to the target Ubuntu version at this step: we first have to build an image, and only then update the builder configs.
+
+- The next step is to rebuild images. Currently `launchpad-mojo-specs `__ (``vbuilder`` branch) uses two charms to rebuild and sync images. You can either trigger a rebuild by following `testing-on-qastaging `_ or use the ``sync-images`` action, as shown below.
+
+  - `charm-glance-simplestreams-sync `_ provides a ``sync-images`` action that downloads the configured base images and calls a hook to run the image modifier charm.
+  - `charm-launchpad-buildd-image-modifier `__ has scripts that create a qemu COW VM image for builders with all the needed dependencies and configuration.
+
+.. code-block:: sh
+
+    juju actions --help
+    juju list-actions <application>
+    juju run-action --verbose <unit> sync-images
+
+- Update the builder config to use the target Ubuntu version in `launchpad-mojo-specs `_. Use ``mojo run`` to deploy the config changes.
+
+- You can either wait for builders to reset and pick up the new image, or reset them using `ubuntu-archive-tools <https://git.launchpad.net/ubuntu-archive-tools>`_:
+
+.. code-block:: sh
+
+    ./manage-builders -l qastaging --disabled -a riscv64 --reset
+
+Notes & Helpful links
+---------------------
+
+- With Ubuntu Noble, ``lxd`` is no longer part of the base image, so it has to be pre-baked into the builder image. Refer to this `commit `_ that pre-bakes ``lxd`` if it is not available. `launchpad-buildd <https://git.launchpad.net/launchpad-buildd>`_ uses ``lxd`` to run builds.
+
+- `Setting up a user on QA staging `_
+
+- `Debugging build farms `_
+
+- Charms can also be manually upgraded for a unit via:
+
+.. code-block:: sh
+
+    juju upgrade-charm <application>
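+
+For example, to try a locally modified copy of the image modifier charm on an existing deployment (the application name and checkout path here are illustrative, not taken from the specs):
+
+.. code-block:: sh
+
+    juju upgrade-charm launchpad-buildd-image-modifier --path ./charm-launchpad-buildd-image-modifier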
diff --git a/how-to/running.rst b/how-to/running.rst
index 672a617..8c9074b 100644
--- a/how-to/running.rst
+++ b/how-to/running.rst
@@ -265,6 +265,8 @@ Finally, build the database schema (this may take several minutes):
 
     $ make schema
 
+If you encounter an error while building Python wheels, see :ref:`pynacl-fix`.
+
 Running
 =======
@@ -417,6 +419,27 @@ the bridge interface on your computer):
 
     sudo ufw allow in on lxdbr0
     sudo ufw route allow in on lxdbr0
 
+.. _pynacl-fix:
+
+Error building Python wheels
+----------------------------
+
+When running ``make schema`` on some machines, ``pynacl`` `fails to build `_, leading to ``ERROR: Failed building wheel for pynacl``.
+
+If you encounter this issue, try running the following:
+
+.. code-block:: shell-session
+
+    $ sudo apt install --yes libsodium-dev
+
+Then add the following line to the ``Makefile`` under the ``PIP_ENV`` commands:
+
+.. code-block:: make
+
+    PIP_ENV += SODIUM_INSTALL=system
+
+Finally, run ``make schema`` again.
+
 Email
 -----
diff --git a/how-to/use-soyuz-locally.rst b/how-to/use-soyuz-locally.rst
new file mode 100644
index 0000000..67a9bb0
--- /dev/null
+++ b/how-to/use-soyuz-locally.rst
@@ -0,0 +1,140 @@
+How to use Soyuz locally
+========================
+
+.. include:: ../includes/important_not_revised.rst
+
+You're going to run Soyuz in a branch you create for the purpose. To get the whole experience, you'll also be installing the builder-side ``launchpad-buildd`` package on your system.
+
+Initial setup
+-------------
+
+ * Run ``utilities/start-dev-soyuz.sh`` to ensure that some Soyuz-related services are running. Some of these may already be running, in which case you'll get some failures that are probably harmless. Note: these services eat lots of memory.
+ * Once you've set up your test database, run ``utilities/soyuz-sampledata-setup.py -e you@example.com`` (where ``you@example.com`` should be an email address you own and have a GPG key for). This prepares more suitable sample data in the ``launchpad_dev`` database, including recent Ubuntu series. If you get a "duplicate key" error, run ``make schema`` and try again.
+ * Run ``make run`` (or, if you also want to use codehosting, ``make run_codehosting``); some services may fail to start up because you already started them, but it shouldn't be a problem.
+ * Open https://launchpad.test/~ppa-user/+archive/test-ppa in a browser to get to your pre-made testing PPA. Log in with your own email address and the password ``test``. This user has your GPG key associated, has signed the Ubuntu Code of Conduct, and is a member of ``ubuntu-team`` (conferring upload rights to the primary archive).
+
+
+Extra PPA dependencies
+^^^^^^^^^^^^^^^^^^^^^^
+
+The testing PPA has an external dependency on Lucid. If that's not enough, or not what you want:
+
+ * Log in as ``admin@canonical.com`` (password ``test``); I suggest using a different browser so you don't break up your ongoing session.
+ * Open https://launchpad.test/~ppa-user/+archive/test-ppa/+admin
+ * Edit the external dependencies. They normally look like:
+
+   .. code-block:: sh
+
+      deb http://archive.ubuntu.com/ubuntu %(series)s main restricted universe multiverse
+
+
+Set up a builder
+----------------
+
+Set up for development
+^^^^^^^^^^^^^^^^^^^^^^
+
+If you are intending to do any development on ``launchpad-buildd`` or similar, you probably want :doc:`develop-with-buildd`.
+
+Installation
+^^^^^^^^^^^^
+
+ * Create a new focal virtual machine with ``kvm`` (recommended), or alternatively a focal ``lxc`` container. If using ``lxc``, set ``lxc.aa_profile = unconfined`` in ``/var/lib/lxc/container-name/config``, which is required to disable ``AppArmor`` confinement.
+
+If you are running Launchpad in a container, you will more than likely want your VM's network bridged on ``lxcbr0``.
+
+In your builder VM/lxc:
+
+.. code-block:: sh
+
+    $ sudo apt-add-repository ppa:launchpad/buildd-staging
+    $ sudo apt-get update
+    $ sudo apt-get install launchpad-buildd bzr-builder quilt binfmt-support qemu-user-static
+
+Alternatively, launchpad-buildd can be built from ``lp:launchpad-buildd`` with ``dpkg-buildpackage -b``.
+
+ * Edit ``/etc/launchpad-buildd/default`` and make sure ``ntphost`` points to an existing NTP server. You can check the `NTP server pool <https://www.pool.ntp.org/>`_ to find one near you.
+
+To make the builder run by default, you should first make sure that other hosts on the Internet cannot send requests to it! Then:
+
+.. code-block:: sh
+
+    $ echo RUN_NETWORK_REQUESTS_AS_ROOT=yes | sudo tee /etc/default/launchpad-buildd
+
+Launchpad Configuration
+^^^^^^^^^^^^^^^^^^^^^^^
+
+From your host system:
+
+ * Get an Ubuntu ``buildd chroot`` from Launchpad, using ``manage-chroot`` from `lp:ubuntu-archive-tools <https://code.launchpad.net/+branch/ubuntu-archive-tools>`_:
+
+   * ``manage-chroot -s precise -a i386 get``
+   * ``LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 manage-chroot -l dev -s precise -a i386 -f chroot-ubuntu-precise-i386.tar.bz2 set``
+
+ * Register a new builder with the URL pointing to ``http://YOUR-BUILDER-IP:8221/`` (https://launchpad.test/builders/+new)
+
+Shortly thereafter, the new builder should report a successful status of 'idle'.
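+
+You can also check the builder's state from a script rather than the web UI. A sketch using ``launchpadlib`` against the development web service (the application name, builder name, and service root are illustrative; as with ``manage-chroot`` above, you may need ``LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1`` for a development instance):
+
+.. code-block:: python
+
+    from launchpadlib.launchpad import Launchpad
+
+    # Log in against the development service root.
+    lp = Launchpad.login_with(
+        "builder-check", service_root="https://api.launchpad.test/")
+
+    # Look the builder up by the name it was registered under.
+    builder = lp.builders.getByName(name="your-builder-name")
+    print(builder.builderok, builder.active)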
+
+If you want to test just the builder without a Launchpad instance then, instead of using ``manage-chroot -l dev set``, you can copy the ``chroot`` tarball to ``/home/buildd/filecache-default/``; the base name of the file should be its ``sha1sum``. You'll need to copy any other needed files (e.g. source packages) into the cache in the same way. You can then send XML-RPC instructions to the builder as below.
+
+Drive builder through RPC
+-------------------------
+
+With the librarian running, fire up a ``python3`` shell and run:
+
+.. code-block:: python
+
+    from xmlrpc.client import ServerProxy
+
+    proxy = ServerProxy('http://localhost:8221/rpc')
+    proxy.ensurepresent('d267a7b39544795f0e98d00c3cf7862045311464', 'http://launchpad.test:58080/93/chroot-ubuntu-lucid-i386.tar.bz2', '', '')
+    proxy.build('1-1', 'translation-templates', 'd267a7b39544795f0e98d00c3cf7862045311464', {},
+                {'archives': ['deb http://archive.ubuntu.com/ubuntu/ lucid main'], 'branch_url': '/home/buildd/gimp-2.6.8'})
+    proxy.status()
+    proxy.clean()  # Clean up if it failed
+
+You may have to calculate a new ``sha1sum`` of the ``chroot`` file.
+
+Upload a source to the PPA
+--------------------------
+
+ * Run ``scripts/process-upload.py /var/tmp/txpkgupload`` (creates the directory hierarchy)
+ * Add to ``~/.dput.cf``:
+
+   .. code-block:: ini
+
+      [lpdev]
+      fqdn = ppa.launchpad.test:2121
+      method = ftp
+      incoming = %(lpdev)s
+      login = anonymous
+
+ * Find a source package ``some_source`` with a changes file ``some_source.changes``
+ * ``dput -u lpdev:~ppa-user/test-ppa/ubuntu some_source.changes``
+ * ``scripts/process-upload.py /var/tmp/txpkgupload -C absolutely-anything -vvv # Accept the source upload.``
+ * If this is your first time running Soyuz locally, you'll also need to publish ubuntu: ``scripts/publish-distro.py -C``
+ * Within five seconds of upload acceptance, the buildd should start building. Wait until it is complete (the build page will say "Uploading build").
+ * ``scripts/process-upload.py -vvv --builds -C buildd /var/tmp/builddmaster # Process the build upload.``
+ * ``scripts/process-accepted.py -vv --ppa ubuntu # Create publishings for the binaries.``
+ * ``scripts/publish-distro.py -vv --ppa # Publish the source and binaries.``
+ * Note that private archive builds will not be dispatched until their source is published.
+
+Build an OCI image
+------------------
+
+ * Using the Launchpad interface, create a new OCI project and a recipe for it.
+ * On the OCI recipe page, click on "Request builds", and select which architectures should be built on the following screen.
+ * Once you have requested a build, run ``./cronscripts/process-job-source.py -v IOCIRecipeRequestBuildsJobSource`` to create builds for that build request.
+ * If you have builders idle, this should start the build. Make sure to have run ``utilities/start-dev-soyuz.sh``, and check builder status on the ``/builders`` page.
+ * Once the build finishes, run ``./scripts/process-upload.py -M --builds /var/tmp/builddmaster/`` on Launchpad to make it collect the built layers and manifests.
+ * At this point, each build page should list the files.
+ * You can upload the built image to a registry by running ``./cronscripts/process-job-source.py -v IOCIRegistryUploadJobSource`` in Launchpad. You can manage the push rules on the OCI recipe's page, by clicking the "Edit push rules" button.
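+
+During development it can be tedious to run the job-source scripts by hand after every request. A small loop can stand in for the production cron jobs; this sketch only chains the scripts named above and is not how production runs them:
+
+.. code-block:: sh
+
+    # Process OCI build requests and registry uploads once a minute.
+    while true; do
+        ./cronscripts/process-job-source.py -v IOCIRecipeRequestBuildsJobSource
+        ./cronscripts/process-job-source.py -v IOCIRegistryUploadJobSource
+        sleep 60
+    done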
+
+Dealing with the primary archive
+--------------------------------
+
+ * ``dput lpdev:ubuntu some_source.changes``
+ * ``scripts/process-upload.py -vvv /var/tmp/txpkgupload``
+ * Watch the output -- the upload might end up in NEW.
+ * If it does, go to the queue and accept it.
+ * Your builder should now be busy. Once it finishes, the binaries might go into NEW. Accept them if required.
+ * ``scripts/process-accepted.py -vv ubuntu``
+ * ``scripts/publish-distro.py -vv``
+ * The first time, add ``-C`` to ensure a full publication of the archive.
diff --git a/images/codehosting.png b/images/codehosting.png
new file mode 100644
index 0000000..0cbffb5
Binary files /dev/null and b/images/codehosting.png differ
diff --git a/reference/services/fetch-service.rst b/reference/services/fetch-service.rst
index 8a108e0..a37e0b4 100644
--- a/reference/services/fetch-service.rst
+++ b/reference/services/fetch-service.rst
@@ -99,6 +99,11 @@ Deployment
 
 We deploy the fetch service using the specs defined in `fetch service mojo specs `_.
 
+In order to evaluate new fetch service versions before they reach production,
+we use different Snap channels for qastaging and production. This information
+is defined both in the above-mentioned mojo specs and in the
+`ST118 fetch service release process `_.
+
 Qastaging
 ~~~~~~~~~
 For qastaging deployment, SSH into