MIRI JWST Pipeline Notebook 1.6 Version Incompatibility Issue #8842
Thanks for reporting this! @kmacdonald-stsci - can you please take a look?
Interesting. The page you provided only goes up to 1.5.1, but this page (https://github.com/spacetelescope/jwst) says the latest version is 1.6, which was released about 10 days ago. Hmm... I pip upgraded jwst this morning, which automatically gives 1.6. The issue was resolved after downgrading to version 1.4.1.
Thanks for checking, but I was just tagging in one of our developers to take a look at your issue. :) To be clear - you saw the issue with jwst 1.16.0, and did not see it when you re-installed 1.14.1?
Oh, my mistake. Yes, it was 1.16.0 that was giving errors; 1.14.1 works just fine.
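For reference, a quick way to confirm which jwst version an environment actually has (a generic sketch, not taken from the notebook):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package):
    """Return the installed version string for a package, or None if it
    is not installed in the current environment."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# In the environment that crashed, this would report "1.16.0":
print(installed_version("jwst"))
```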
I can reproduce the issue with one of the MIRI uncal files from the notebook, with multiprocessing on for ramp fitting. @hcha9 - it looks like you should be able to work around it in v1.16.0 by turning off multiprocessing for ramp fitting. Try commenting out this line in cell 16:
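The exact contents of cell 16 are not quoted in this thread, but the workaround might look something like this. The `det1dict` structure and the `"half"` value are assumptions; `maximum_cores` is the ramp_fit step's multiprocessing parameter, and `"none"` runs it single-process:

```python
# Hypothetical reconstruction of the notebook's Detector1 parameter dict;
# the actual cell 16 contents are not shown in this thread.
det1dict = {"ramp_fit": {"maximum_cores": "half"}}  # multiprocessing on

# Workaround for the jwst 1.16.0 crash: force single-process ramp fitting.
det1dict["ramp_fit"]["maximum_cores"] = "none"
```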
There we go! Awesome! Thank you so much! I missed that part, and it works just fine with 1.16.0.
One more follow-up - it looks like this issue should be fixed when spacetelescope/stcal#289 is merged. Testing with stcal on that branch, I do not see the initialization error with the above test case.
This bug is caused by erroneously going into the CHARGELOSS computation here: It is because of this default. I found this potential bug while testing a currently open PR. In this PR that value now defaults to
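In Python terms, the failure mode described above might be sketched like this. The names and the guard are illustrative, not stcal's actual C code: `orig_gdq` is only allocated when the charge-migration step actually ran, so entering the CHARGELOSS branch without it dereferences a NULL pointer:

```python
def chargeloss(orig_gdq, do_chargeloss):
    """Guarded stand-in for the CHARGELOSS computation: bail out when the
    orig_gdq array was never allocated instead of dereferencing it."""
    if not do_chargeloss or orig_gdq is None:
        return None  # nothing to do; this guard is what the fix amounts to
    return sum(orig_gdq)  # placeholder for the real per-pixel computation

# Flag erroneously set while the array is missing -> guarded, no crash:
print(chargeloss(None, True))
```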
Thank you so much for helping me figure out bugs with the latest version! I just ran into another bug:
Sorry you're running into all these issues at once! This one:
is a known issue, relating to some release coordination between CRDS and the new pipeline version. The issue and fix are described here: |
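One common pattern for this kind of CRDS/pipeline coordination problem is to pin the CRDS context explicitly before running the pipeline, via the `CRDS_CONTEXT` environment variable. The pmap name below is a placeholder, not the one from the linked fix:

```python
import os

# Pin the CRDS context before any pipeline imports, so reference-file
# lookups match the pipeline version.  "jwst_XXXX.pmap" is a placeholder;
# substitute the context recommended in the linked issue.
os.environ["CRDS_CONTEXT"] = "jwst_XXXX.pmap"
print(os.environ["CRDS_CONTEXT"])
```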
I am going through this notebook to rerun the demo before loading our data. I had no issues with previous versions of jwst, but the new version 1.6 returns an error when processing the data. Reinstalling the previous version worked just fine.
https://github.com/spacetelescope/jwst-pipeline-notebooks/blob/main/notebooks/MIRI/JWPipeNB-MIRI-MRS.ipynb
This was line 19.
2024-09-30 01:18:15,568 - stpipe.Detector1Pipeline.ramp_fit - INFO - MIRI dataset has all pixels in the final group flagged as DO_NOT_USE.
2024-09-30 01:18:15,568 - stpipe.Detector1Pipeline.ramp_fit - INFO - Number of processors used for multiprocessing: 6
Error - [C:2582] pr->orig_gdq is NULL.
Error - [C:2582] pr->orig_gdq is NULL.
Error - [C:2582] pr->orig_gdq is NULL.
Error - [C:2582] pr->orig_gdq is NULL.
Error - [C:2582] pr->orig_gdq is NULL.
Error - [C:2582] pr->orig_gdq is NULL.
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
MemoryError: pr->orig_gdq is NULL.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/Cellar/[email protected]/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/[email protected]/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/pool.py", line 51, in starmapstar
return list(itertools.starmap(args[0], args[1]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hcha/jwst-env/lib/python3.12/site-packages/stcal/ramp_fitting/ols_fit.py", line 676, in ols_ramp_fit_single
image_info, integ_info, opt_info = ols_slope_fitter(
^^^^^^^^^^^^^^^^^
SystemError: returned a result with an exception set
"""
The above exception was the direct cause of the following exception:
SystemError Traceback (most recent call last)
Cell In[20], line 5
3 if dodet1:
4 for file in uncal_files:
----> 5 Detector1Pipeline.call(file, steps=det1dict, save_results=True, output_dir=det1_dir)
6 else:
7 print('Skipping Detector1 processing for SCI data')
File ~/jwst-env/lib/python3.12/site-packages/stpipe/step.py:697, in Step.call(cls, *args, **kwargs)
694 name = config.get("name", None)
695 instance = cls.from_config_section(config, name=name, config_file=config_file)
--> 697 return instance.run(*args)
File ~/jwst-env/lib/python3.12/site-packages/jwst/stpipe/core.py:100, in JwstStep.run(self, *args, **kwargs)
98 @wraps(Step.run)
99 def run(self, *args, **kwargs):
--> 100 result = super().run(*args, **kwargs)
101 if not self.parent:
102 log.info(f"Results used jwst version: {version}")
File ~/jwst-env/lib/python3.12/site-packages/stpipe/step.py:524, in Step.run(self, *args)
522 self.prefetch(*args)
523 try:
--> 524 step_result = self.process(*args)
525 except TypeError as e:
526 if "process() takes exactly" in str(e):
File ~/jwst-env/lib/python3.12/site-packages/jwst/pipeline/calwebb_detector1.py:149, in Detector1Pipeline.process(self, input)
147 ints_model = None
148 else:
--> 149 input, ints_model = self.ramp_fit(input)
151 # apply the gain_scale step to the exposure-level product
152 if input is not None:
File ~/jwst-env/lib/python3.12/site-packages/stpipe/step.py:524, in Step.run(self, *args)
522 self.prefetch(*args)
523 try:
--> 524 step_result = self.process(*args)
525 except TypeError as e:
526 if "process() takes exactly" in str(e):
File ~/jwst-env/lib/python3.12/site-packages/jwst/ramp_fitting/ramp_fit_step.py:464, in RampFitStep.process(self, step_input)
459 input_model_W = result.copy()
461 # Run ramp_fit(), ignoring all DO_NOT_USE groups, and return the
462 # ramp fitting arrays for the ImageModel, the CubeModel, and the
463 # RampFitOutputModel.
--> 464 image_info, integ_info, opt_info, gls_opt_model = ramp_fit.ramp_fit(
465 result, buffsize, self.save_opt, readnoise_2d, gain_2d,
466 self.algorithm, self.weighting, max_cores, dqflags.pixel,
467 suppress_one_group=self.suppress_one_group)
469 # Create a gdq to modify if there are charge_migrated groups
470 if self.algorithm == "OLS":
File ~/jwst-env/lib/python3.12/site-packages/stcal/ramp_fitting/ramp_fit.py:195, in ramp_fit(model, buffsize, save_opt, readnoise_2d, gain_2d, algorithm, weighting, max_cores, dqflags, suppress_one_group)
190 # Create an instance of the internal ramp class, using only values needed
191 # for ramp fitting from the to remove further ramp fitting dependence on
192 # data models.
193 ramp_data = create_ramp_fit_class(model, algorithm, dqflags, suppress_one_group)
--> 195 return ramp_fit_data(
196 ramp_data, buffsize, save_opt, readnoise_2d, gain_2d, algorithm, weighting, max_cores, dqflags
197 )
File ~/jwst-env/lib/python3.12/site-packages/stcal/ramp_fitting/ramp_fit.py:295, in ramp_fit_data(ramp_data, buffsize, save_opt, readnoise_2d, gain_2d, algorithm, weighting, max_cores, dqflags)
292 suppress_one_good_group_ramps(ramp_data)
294 # Compute ramp fitting using ordinary least squares.
--> 295 image_info, integ_info, opt_info = ols_fit.ols_ramp_fit_multi(
296 ramp_data, buffsize, save_opt, readnoise_2d, gain_2d, weighting, max_cores
297 )
298 gls_opt_info = None
300 return image_info, integ_info, opt_info, gls_opt_info
File ~/jwst-env/lib/python3.12/site-packages/stcal/ramp_fitting/ols_fit.py:111, in ols_ramp_fit_multi(ramp_data, buffsize, save_opt, readnoise_2d, gain_2d, weighting, max_cores)
108 return image_info, integ_info, opt_info
110 # Call ramp fitting for multi-processor (multiple data slices) case
--> 111 image_info, integ_info, opt_info = ols_ramp_fit_multiprocessing(
112 ramp_data, buffsize, save_opt, readnoise_2d, gain_2d, weighting, number_slices
113 )
115 return image_info, integ_info, opt_info
File ~/jwst-env/lib/python3.12/site-packages/stcal/ramp_fitting/ols_fit.py:169, in ols_ramp_fit_multiprocessing(ramp_data, buffsize, save_opt, readnoise_2d, gain_2d, weighting, number_slices)
167 ctx = multiprocessing.get_context("forkserver")
168 pool = ctx.Pool(processes=number_slices)
--> 169 pool_results = pool.starmap(ols_ramp_fit_single, slices)
170 pool.close()
171 pool.join()
File /usr/local/Cellar/[email protected]/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/pool.py:375, in Pool.starmap(self, func, iterable, chunksize)
369 def starmap(self, func, iterable, chunksize=None):
370 '''
371 Like map() method but the elements of the iterable are expected to
372 be iterables as well and will be unpacked as arguments. Hence
373 func and (a, b) becomes func(a, b).
374 '''
--> 375 return self._map_async(func, iterable, starmapstar, chunksize).get()
File /usr/local/Cellar/[email protected]/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/pool.py:774, in ApplyResult.get(self, timeout)
772 return self._value
773 else:
--> 774 raise self._value
SystemError: returned a result with an exception set