Enable logs to be stored for successful CI builds #944
@chavafg Can you take a look at this?
I'm guessing this should really have been raised on https://github.com/clearcontainers/jenkins. /cc @grahamwhaley as this might have implications for the metrics system storage requirements.
We should probably discuss and define which logs we keep, and how much debug output they contain. @chavafg I think it was recently pointed out that the metrics CI logs were already pretty big; I should check that, as it is not intentional.
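For context, one quick way to check how big those per-build console logs already are on the master; a minimal sketch, assuming a standard Jenkins home layout and that the metrics jobs match a `*metrics*` name pattern:

```sh
#!/bin/bash
# Total size of the stored console logs for the metrics jobs.
# Jenkins keeps each build's console log at jobs/<job>/builds/<n>/log.
# The $JENKINS_HOME default and the job-name glob are assumptions.
du -shc "${JENKINS_HOME:-/var/lib/jenkins}"/jobs/*metrics*/builds/*/log \
    2>/dev/null | tail -n 1
```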
For reference, that magic script is https://github.com/clearcontainers/runtime/blob/master/data/collect-data.sh.in. @amshinde - can you give a concrete example where retaining logs would have helped? I'm not disagreeing that it's a good idea, but it would be good to explore whether there are other ways to give you what you want. How long do we think we'll need to store logs? "Forever" probably won't cut it, so would a month (4 releases) be sufficient, do you think? But as @grahamwhaley is suggesting, I'm not sure we need to keep the logs as long as we can know the environment the tests ran in, to allow a test run to be recreated, namely:
As denoted by the checkboxes, the … For reference, the output of the collect script when … If we decide to store full logs for all PRs, we'll need something in place to warn about the …
Oh - we might also want to include …
Agree on logs and longevity - I'm going to presume Jenkins has some plugin or setting that can manage and expire the gathered results files, and we should look at that indeed (we do collect the .csv results files for the metrics at present, for instance, but do not expire them).
procenv was the magic I was thinking of :-) |
Ah - soz - so much magic about! ;) |
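Since procenv came up: a minimal sketch of capturing the full environment as a build artifact; the output path under $WORKSPACE is an assumption about the job setup:

```sh
#!/bin/bash
# Dump the complete process/system environment for later inspection.
# procenv with no arguments prints everything it knows to stdout.
procenv > "${WORKSPACE:-/tmp}/procenv.log" 2>&1
```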
I think @amshinde's concern is knowing the agent version; at some point last week we had the wrong version testing the latest PRs.
@chavafg - we could just run …
@jodh-intel yes, I think that would be the best. does …
@chavafg - good point! No, it doesn't. I've had a think about this and I can think of two ways we could do this:

The gross hack

We could capture the agent version by adding something like a "…"
But it's a hack ;)

The slightly-less gross option

Change the runtime so that it loop-mounts the currently configured container image read-only (with … That seems like the best option, but wdyt @grahamwhaley, @sboeuf, @sameo?
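A sketch of what that loop-mount inspection could look like if it lived in the collect script rather than the runtime. The image path, the agent path inside the image, and the presence of an embedded version string are all assumptions; depending on the image layout the mount may also need an offset:

```sh
#!/bin/bash
# Loop-mount the configured image read-only and scrape the agent version.
set -e
img="/usr/share/clear-containers/clear-containers.img"  # assumed path
mnt=$(mktemp -d)
trap 'sudo umount "$mnt" 2>/dev/null || true; rmdir "$mnt"' EXIT
sudo mount -o loop,ro "$img" "$mnt"
# Assumed agent location inside the image; scrape the binary with
# strings(1) in case there is no --version flag to call.
strings "$mnt/usr/bin/cc-agent" | grep -i -m 1 'version' || true
```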
Very recently I had also considered that we could loop-mount the …
I was having similar feelings about having that sort of code in the runtime too. That said, we do sort of have precedent if you look at … I'm happy for us to have this purely in the collect script but, yes, if it doesn't go in the runtime, we need to remove the …
@chavafg @jodh-intel @grahamwhaley Gathering the agent version was one of the requirements I had in mind, as we were running with the wrong agent last week. What I really wanted to look at were the CRI-O logs, to inspect the lifecycle events and check that the container storage driver we pass is actually the one being used by CRI-O.
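For that case, a sketch of what the job could archive; it assumes CRI-O logs to the journal under a unit named crio and that the storage driver shows up in its startup messages:

```sh
#!/bin/bash
# Save the CRI-O journal (lifecycle events) and pull out any lines that
# mention the storage driver, so the configured driver can be verified.
out="${WORKSPACE:-/tmp}/crio.log"
sudo journalctl --no-pager -u crio > "$out"
grep -i 'storage' "$out" | head
```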
Btw, it looks like the Jenkins 'Discard old builds' option may also give us the ability to specify how long to keep artifacts.
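That built-in setting is presumably the right knob; failing that, a cron-style sketch could expire archived artifacts by age. The 30-day window (roughly 4 releases) and the Jenkins home location are assumptions:

```sh
#!/bin/bash
# Delete per-build archived artifacts older than ~30 days.
# Jenkins stores them under jobs/<job>/builds/<n>/archive/.
find "${JENKINS_HOME:-/var/lib/jenkins}"/jobs/*/builds/*/archive \
    -type f -mtime +30 -delete 2>/dev/null
```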
Currently we can only retrieve the logs from Jenkins when a build has failed. We should be able to retrieve them for successful builds as well, to be able to inspect whether we are running with the correct environment.
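A minimal sketch of the requested change: run the collection step in the job's teardown unconditionally, rather than only on failure, so the output is there to archive for green builds too. The installed script name and the workspace destination are assumptions:

```sh
#!/bin/bash
# Collect environment/debug data whether or not the tests passed.
# "|| true" stops a collection hiccup from failing an otherwise green build.
sudo cc-collect-data.sh > "${WORKSPACE:-/tmp}/cc-collect-data.log" 2>&1 || true
```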