Update environment.yml to make installation work again #36

Open · wants to merge 30 commits into base: master

Commits (30)
649094d
add gcc/g++ to environment.yml
ada52 Jul 26, 2023
092a3f2
I added comment to environment.yml
ada52 Jul 26, 2023
e0eec51
we commented elektronn3 line in environment.yml file, because it failed
ada52 Jul 26, 2023
c6b3fd5
save
ada52 Jul 26, 2023
d3f568f
deleted a line which gave error compiling
ada52 Jul 26, 2023
35418c0
working environment
ada52 Jul 26, 2023
333d721
changed open3d version, python version
ada52 Jul 28, 2023
6983324
we changed scikit-learn version from 0.24.0 to 0.21.3 because of rfc …
ada52 Aug 2, 2023
a317ac3
Bug fix:rfc feature number and name changes
ada52 Aug 4, 2023
cb0dc58
syconn2 uses specific elektronn3 version, which requires specific ve…
ada52 Aug 4, 2023
8c3641b
Use custom elektronn3 repo with corrected knn.cxx path
erjel Aug 28, 2023
8715a54
doc: Mamba installation steps
erjel Aug 28, 2023
208dd6b
Use older open3d version to fix known glibc issue
erjel Aug 28, 2023
d88eb0b
open3d<=0.9 requires python 3.7
erjel Aug 28, 2023
45794af
bugfix in knossos_utils for python 3.7
erjel Aug 29, 2023
3e20171
Use legacy scikit-learn to prevent error with random forest classifier
erjel Aug 29, 2023
5c4c67d
bugfix: torch-sparse resulted in seg fault
erjel Aug 29, 2023
439d16e
bugfix: Make prediction work with legacy models from example data
erjel Aug 29, 2023
66cf5f2
Merge pull request #1 from erjel/install
ada52 Aug 30, 2023
5ab2eb5
eric's change was reversed, changed it back
ada52 Aug 30, 2023
2e59d9a
Merge branch 'master' of github.com:ada52/SyConn
ada52 Aug 31, 2023
afde928
bugfix: python2 does not support typing
erjel Sep 4, 2023
b1d3d99
doc: Write down principal steps for viewer setup
erjel Sep 4, 2023
bbc2e5a
typo
erjel Sep 4, 2023
52b43f2
Update instructions.rst
erjel Sep 7, 2023
4c517c2
Merge pull request #2 from erjel/knossos
ada52 Sep 7, 2023
4c43445
Update environment.yml to work again
spiralsim Jul 1, 2024
92b8e2e
Add more clarification for Anaconda and libmamba
spiralsim Jul 1, 2024
563a2f6
Add clarification for installing g++
spiralsim Jul 1, 2024
33424d5
Fix formatting bug with Anaconda link
spiralsim Jul 1, 2024
73 changes: 54 additions & 19 deletions docs/instructions.rst
@@ -16,21 +16,29 @@ More details are linked in the respective chapters.
Installation
------------

- Python 3.7
- The whole pipeline was designed and tested on Linux systems
- linux-64 or equivalent OS (ex. WSL)
- This is required because the dependency `menpo::osmesa <https://anaconda.org/menpo/osmesa>`__ is currently only available on linux-64.
- Tested on Ubuntu 22.04.3 LTS

Before you can set up SyConn, ensure that the
`conda <https://docs.conda.io/projects/conda/en/latest/user-guide/install/>`__
package manager is installed on your system. Then you can install SyConn
and all of its dependencies into a new conda
Before you can set up SyConn, ensure that the latest version of the conda package manager is installed on your system.
`Anaconda <https://anaconda.org>`__ with `libmamba <https://www.anaconda.com/blog/a-faster-conda-for-a-growing-community>`__ seems to be the fastest option.

You may also need to install g++:

::

sudo apt-get install g++

Then you can install SyConn and all of its dependencies into a new conda
`environment <https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/environments.html>`__
named “syconn2” by running:

::

git clone https://github.com/StructuralNeurobiologyLab/SyConn
cd SyConn
conda env create -f environment.yml -n syconn2 python=3.7
conda config --set solver libmamba # If libmamba isn't already set as default
conda env create -n syconn2 -f environment.yml
conda activate syconn2
pip install -e .

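After activating the environment, a one-line check (a minimal sketch, not part of the PR) confirms the interpreter satisfies the ``python >= 3.6`` pin from environment.yml:

```python
import sys

# environment.yml pins python >= 3.6; confirm the active interpreter matches.
print(sys.version_info >= (3, 6))  # True inside the syconn2 env
```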
@@ -43,12 +51,6 @@ command with:

pip install .

To update the environment, e.g. if the environment file changed, use:

::

conda env update --name syco --file environment.yml --prune

If you encounter

::
@@ -207,32 +209,65 @@ After initialization of the SDs (cell and sub-cellular structures, step
SyConn KNOSSOS viewer
---------------------

This setup assumes that you run Linux (or WSL on Windows). If you don't
have the required packages installed, you additionally need ``sudo`` rights
or must ask your system administrator to install them for you.

The following packages have to be available in the system’s python2
interpreter (will differ from the conda environment):

- numpy
- lz4
- requests

One approach is to install them via ``pip``:
::

wget -P ~/.local/lib https://bootstrap.pypa.io/pip/2.7/get-pip.py
python2 ~/.local/lib/get-pip.py --user
python2 -m pip install numpy requests lz4

In order to inspect the resulting data via the SyConnViewer
KNOSSOS-plugin follow these steps:

- Wait until ``start.py`` finished. For starting the server manually
run ``syconn.server --working_dir=<path>`` which executes
``syconn/kplugin/server.py`` and allows to visualize the analysis
run ``syconn.server --working_dir=<path>`` in the syconn conda environment
which executes ``syconn/analysis/server.py`` and allows to visualize the analysis
results of the working directory at (``<path>``) in KNOSSOS. The
server address and port will be printed.

- Download and run the nightly build of KNOSSOS
(https://github.com/knossos-project/knossos/releases/tag/nightly)
- Download and run version 5.1 of KNOSSOS
(https://github.com/knossos-project/knossos/releases/tag/v5.1)
::

wget https://github.com/knossos-project/knossos/releases/download/v5.1/linux.KNOSSOS-5.1.AppImage
chmod u+x linux.KNOSSOS-5.1.AppImage
./linux.KNOSSOS-5.1.AppImage

Possible pitfalls:
If you see ``libpython2.7.so.1.0: cannot open shared object file: No such file or directory``,
you need to install the ``libpython2.7`` package on your system:
::

sudo apt install libpython2.7

If the AppImage complains about missing ``fusermount``, you need to install it (e.g. on Ubuntu 22.04):
::

sudo apt install libfuse2

If the AppImage complains about ``error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory``, you need to `install <https://stackoverflow.com/a/68666500>`__ it:
::

sudo apt install libgl1

- In KNOSSOS -> File -> Choose Dataset -> browse to your working
directory and open ``knossosdatasets/seg/mag1/knossos.conf`` with
enabled ‘load_segmentation_overlay’ (at the bottom of the dialog).

- Then go to Scripting (top row) -> Run file -> browse to
``syconn/kplugin/syconn_knossos_viewer.py``, open it and enter the
port and address of the syconn server.
``syconn/analysis/syconn_knossos_viewer.py``, open it and enter the
port and address of the syconn server as printed in the terminal.

- After the SyConnViewer window has opened, the selection of
segmentation fragments in the slice-viewports (exploration mode) or
14 changes: 6 additions & 8 deletions environment.yml
@@ -18,8 +18,10 @@ dependencies:
# - pytorch-sparse

# From conda-forge and defaults
- python >= 3.9 # (3.6 should also work)
- python >= 3.6 # Setting python=3.7 explicitly may cause the "Solving environment" step to hang
- pip
- setuptools < 70 # setuptools >= 70.0.0 throws "cannot import name 'packaging' from 'pkg_resources'"
#- gxx_linux-64
- lemon
- vigra
- freeglut
@@ -79,13 +81,10 @@ dependencies:
- open3d
- zmesh
- plyfile
- torch_geometric == 2.0.2 # 2.0.3 is incompatible with lcp.knn.quantized_sampling
- --find-links https://data.pyg.org/whl/torch-1.12.0+cu116.html
- torch-sparse
- torch-scatter
- torch_geometric == 2.0.2
# Pre-release packages
- git+https://github.com/ELEKTRONN/elektronn3.git@syconn2#egg=elektronn3
- git+https://github.com/knossos-project/knossos_utils.git@syconn2#egg=knossos_utils
- git+https://github.com/mpinb/elektronn3.git@syconn2-mod#egg=elektronn3
- git+https://github.com/mpinb/knossos_utils.git@syconn2-mod#egg=knossos_utils
- git+https://github.com/StructuralNeurobiologyLab/[email protected]#egg=MorphX

# cloud-volume >=4 throws an error in simple_merge during np.concatenate if any skeleton has no vertices
@@ -99,4 +98,3 @@ dependencies:

#for skeletonisation
- fill-voids

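The ``setuptools < 70`` pin above can be sketched as a major-version check (pure Python; the helper name is hypothetical, not from the repository):

```python
def major(version_string):
    """Return the major component of a version string like '69.5.1'."""
    return int(version_string.split(".")[0])

# setuptools >= 70.0.0 drops pkg_resources' vendored 'packaging'
# ("cannot import name 'packaging' from 'pkg_resources'"), so the
# environment keeps the major version strictly below 70.
print(major("69.5.1") < 70)   # True: satisfies the pin
print(major("70.0.0") < 70)   # False: would break the build
```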
3 changes: 1 addition & 2 deletions syconn/analysis/syconn_knossos_viewer.py
@@ -4,7 +4,6 @@
# Copyright (c) 2016 - now
# Max-Planck-Institute of Neurobiology, Munich, Germany
# Authors: Philipp Schubert, Joergen Kornfeld
from typing import Dict, Any

from PythonQt import QtGui, Qt, QtCore
from PythonQt.QtGui import QTableWidget, QTableWidgetItem
@@ -27,7 +26,7 @@ class SyConnGateInteraction(object):
"""
Query the SyConn backend server.
"""
ct_from_cache: Dict[Any, Any]
ct_from_cache = {} # type: Dict[Any, Any]

def __init__(self, server, synthresh=0.5, axodend_only=True):
self.server = server
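The change above works around KNOSSOS embedding a Python 2 interpreter: variable annotations (``name: Dict[...]``) are a syntax error there, and ``import typing`` is unavailable, while comment-style annotations are ignored at runtime. A minimal sketch (the class name is hypothetical):

```python
class GateCacheSketch(object):  # new-style class, Python-2 compatible
    # Comment-style annotation: Python 2 parses this as a plain comment,
    # but type checkers such as mypy still read it.
    ct_from_cache = {}  # type: dict

gate = GateCacheSketch()
gate.ct_from_cache["ssv_id"] = {"celltype": 1}
print(gate.ct_from_cache)  # {'ssv_id': {'celltype': 1}}
```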
7 changes: 4 additions & 3 deletions syconn/extraction/cs_processing_steps.py
@@ -1411,16 +1411,17 @@ def synssv_o_features(synssv_o: segmentation.SegmentationObject) -> list:
Returns:
list
"""
features = [synssv_o.size, synssv_o.mesh_area]
# print(synssv_o.attr_dict.keys())
features = [synssv_o.size, synssv_o.mesh_area, synssv_o.attr_dict["syn_type_sym_ratio"]] #NOTE(ada): we need to delete 3

partner_ids = synssv_o.attr_dict["neuron_partners"]
for i_partner_id, partner_id in enumerate(partner_ids):
features.append(synssv_o.attr_dict["n_mi_objs_%d" % i_partner_id])
features.append(synssv_o.attr_dict["n_mi_vxs_%d" % i_partner_id])
features.append(synssv_o.attr_dict["min_dst_mi_nm_%d" % i_partner_id])
#features.append(synssv_o.attr_dict["min_dst_mi_nm_%d" % i_partner_id])
features.append(synssv_o.attr_dict["n_vc_objs_%d" % i_partner_id])
features.append(synssv_o.attr_dict["n_vc_vxs_%d" % i_partner_id])
features.append(synssv_o.attr_dict["min_dst_vc_nm_%d" % i_partner_id])
#features.append(synssv_o.attr_dict["min_dst_vc_nm_%d" % i_partner_id])
return features


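The revised feature layout (3 global features plus 4 per partner, with the ``min_dst_*`` distances disabled) can be sketched with a hypothetical stub object standing in for a SegmentationObject:

```python
class FakeSynObject:
    """Hypothetical stub; values are illustrative only."""
    size = 1200
    mesh_area = 3.5
    attr_dict = {
        "syn_type_sym_ratio": 0.4,
        "neuron_partners": [10, 11],
        "n_mi_objs_0": 2, "n_mi_vxs_0": 500,
        "n_vc_objs_0": 1, "n_vc_vxs_0": 200,
        "n_mi_objs_1": 3, "n_mi_vxs_1": 700,
        "n_vc_objs_1": 2, "n_vc_vxs_1": 300,
    }

def features_sketch(o):
    # 3 global features, then 4 per partner (the min_dst_* features are
    # commented out in this PR to match the legacy random forest models).
    feats = [o.size, o.mesh_area, o.attr_dict["syn_type_sym_ratio"]]
    for i, _ in enumerate(o.attr_dict["neuron_partners"]):
        for key in ("n_mi_objs_%d", "n_mi_vxs_%d", "n_vc_objs_%d", "n_vc_vxs_%d"):
            feats.append(o.attr_dict[key % i])
    return feats

print(len(features_sketch(FakeSynObject())))  # 11: 3 global + 2 partners * 4
```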
2 changes: 1 addition & 1 deletion syconn/extraction/find_object_properties_C.pyx
@@ -13,7 +13,7 @@ ctypedef fused n_type:

ctypedef vector[int] int_vec
ctypedef vector[int_vec] int_vec_vec
ctypedef vector[n_type[:, :, :]] uintarr_vec

ctypedef vector[unordered_map[uint64_t, int_vec]] umvec_rc
ctypedef vector[unordered_map[uint64_t, int_vec_vec]] umvec_bb
ctypedef vector[unordered_map[uint64_t, int]] umvec_size
13 changes: 7 additions & 6 deletions syconn/handler/prediction_pts.py
@@ -1926,7 +1926,8 @@ def predict_cmpt_ssd(ssd_kwargs, mpath: Optional[str] = None, ssv_ids: Optional[
mpath = os.path.expanduser(mpath)
if os.path.isdir(mpath):
# multiple models
mpaths = glob.glob(mpath + '*/state_dict.pth')
#mpaths = glob.glob(mpath + '*/state_dict.pth')
mpaths = glob.glob(mpath + '*.pth')
else:
# single model
mpaths = [mpath]
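The changed glob above switches from a per-model subdirectory layout (``<dir>*/state_dict.pth``) to a flat layout (``<dir>*.pth``). A small sketch with a hypothetical temporary layout:

```python
import glob
import os
import tempfile

# Flat layout assumed by the updated pattern: model files sit directly
# in the directory (as in the example data); file names are hypothetical.
root = tempfile.mkdtemp()
for name in ("cmpt_ads.pth", "cmpt_fine.pth"):
    open(os.path.join(root, name), "w").close()

mpath = root + os.sep
mpaths = sorted(glob.glob(mpath + "*.pth"))
print([os.path.basename(p) for p in mpaths])  # ['cmpt_ads.pth', 'cmpt_fine.pth']
```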
@@ -1967,7 +1968,7 @@ def predict_cmpt_ssd(ssd_kwargs, mpath: Optional[str] = None, ssv_ids: Optional[
batchsizes[ctx] = int(batchsizes[ctx]*default_kwargs['bs'])
default_kwargs['bs'] = batchsizes
out_dc = predict_pts_plain(ssd_kwargs,
model_loader=get_cmpt_model_pts,
model_loader=get_cpmt_model_pts_OLD,
loader_func=pts_loader_cpmt,
pred_func=pts_pred_cmpt,
postproc_func=pts_postproc_cpmt,
@@ -1997,7 +1998,7 @@ def get_cpmt_model_pts_OLD(mpath: Optional[str] = None, device='cuda', pred_type
mpath = os.path.expanduser(mpath)
if os.path.isdir(mpath):
# multiple models
mpaths = glob.glob(mpath + '*/*.pth')
mpaths = glob.glob(mpath + '*.pth')
else:
# single model, must contain 'cmpt' in its name
mpaths = [mpath]
@@ -2223,9 +2224,9 @@ def pts_pred_cmpt(m, inp, q_out, d_out, q_cnt, device, bs):
high = bs * (ii + 1)
with torch.no_grad():
# transpose is required for lcp architectures
g_inp = [torch.from_numpy(i[low:high]).to(device).float().transpose(1, 2) for i in model_inp]
g_inp = [torch.from_numpy(i[low:high]).to(device).float() for i in model_inp]
out = m[batch_progress[2]](*g_inp)
out = out.transpose(1, 2).cpu().numpy()
out = out.cpu().numpy()
masks = batch_mask[low:high]
# filter vertices which belong to sv (discard predictions for cell organelles)
out = out[masks]
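The dropped ``.transpose(1, 2)`` calls reflect a shape-convention difference: lcp architectures consume ``(batch, channels, points)``, while the legacy models from the example data expect ``(batch, points, channels)``. A pure-Python stand-in for the transpose (assumed behavior, mirroring torch's ``tensor.transpose(1, 2)``):

```python
def transpose_1_2(batch):
    # Swap axes 1 and 2 of a nested list, like tensor.transpose(1, 2).
    return [[list(col) for col in zip(*sample)] for sample in batch]

x = [[[0.0] * 3 for _ in range(4)] for _ in range(2)]  # shape (2, 4, 3)
y = transpose_1_2(x)
print(len(y), len(y[0]), len(y[0][0]))  # 2 3 4
```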
@@ -2435,7 +2436,7 @@ def get_cmpt_kwargs(mdir: str) -> Tuple[dict, dict]:
ctx = int(re.findall(r'_ctx(\d+)_', mdir)[-1])
feat_dim = int(re.findall(r'_fdim(\d+)', mdir)[-1])
class_num = int(re.findall(r'_cnum(\d+)', mdir)[-1])
pred_type = re.findall(r'_types([^_]+)_', mdir)[-1]
pred_type = re.findall(r'_t([^_]+)_', mdir)[-1]
batchsize = int(re.findall(r'_bs(\d+)_', mdir)[-1])
# TODO: Fix neighbor_nums or create extra model
mkwargs = dict(input_channels=feat_dim, output_channels=class_num, use_norm=use_norm, use_bias=use_bias,
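The adjusted ``pred_type`` pattern (``_t([^_]+)_`` instead of ``_types([^_]+)_``) parses model directory names like the hypothetical one below (the name is illustrative, not from the repository):

```python
import re

# Hypothetical directory name following the naming scheme parsed above.
mdir = "cmpt_ctx20000_fdim4_cnum3_tads_bs40_run0"

ctx = int(re.findall(r'_ctx(\d+)_', mdir)[-1])
feat_dim = int(re.findall(r'_fdim(\d+)', mdir)[-1])
class_num = int(re.findall(r'_cnum(\d+)', mdir)[-1])
pred_type = re.findall(r'_t([^_]+)_', mdir)[-1]   # matches '_tads_' -> 'ads'
batchsize = int(re.findall(r'_bs(\d+)_', mdir)[-1])
print(ctx, feat_dim, class_num, pred_type, batchsize)  # 20000 4 3 ads 40
```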