Added general troubleshooting page #20

Merged 64 commits on Jun 26, 2024

Commits
1ee4849
Added Farm landing page information
Posmani Jun 24, 2024
dfa0082
Added general troubleshooting page
Jun 24, 2024
b2eb15d
module system: swap from lmod descriptions to tmod, remove Franklin-s…
camillescott Jun 24, 2024
bf1ee12
Redone landing page
cbookman Jun 24, 2024
404cb22
Merge remote-tracking branch 'origin/main' into camille/general/software
camillescott Jun 24, 2024
243d7d0
Added Netbox Procedure
Jun 24, 2024
b4c6c80
Made suggested changes
Jun 24, 2024
08df3f4
Made suggested changes again
johnmc-ucdavis Jun 24, 2024
adea139
Complied with all of the taskmaster's strict requests. Added spaces a…
cbookman Jun 24, 2024
f816734
Made suggested changes again
johnmc-ucdavis Jun 24, 2024
68af5e3
Made suggested changes once more
johnmc-ucdavis Jun 24, 2024
b6038a2
Forgot the h5 to h4 edit. Whoops.
cbookman Jun 24, 2024
8e8ea27
Merge pull request #22 from ucdavis/charles-branch
camillescottatwork Jun 24, 2024
81915db
software index page
camillescott Jun 24, 2024
aaa682f
Add macOS DS_Store files to gitignore
camillescott Jun 24, 2024
36c383c
access and farm
Posmani Jun 24, 2024
8de05e5
farm resources
Posmani Jun 24, 2024
fa78a25
add windows file nonsense to gitattributes
Posmani Jun 24, 2024
66bd9ac
Added more details
johnmc-ucdavis Jun 24, 2024
afd2cd7
How to add components to a device
Jun 24, 2024
43d2d4d
Added Franklin to introductory page. Sigh. Added account request page.
cbookman Jun 24, 2024
a714f1a
adjusted some formatting
Jun 24, 2024
646d2fc
Added Kerberos stuff
johnmc-ucdavis Jun 24, 2024
bb5212c
Added LSSC0 to account request form. This might be handy.
cbookman Jun 24, 2024
d1dd9b4
Consistency tweak
johnmc-ucdavis Jun 25, 2024
e7f8d91
[software] add software policy, landing page
camillescott Jun 25, 2024
0f3a9c4
[assets] update logos and favicon
camillescott Jun 25, 2024
d16ab89
[franklin/software] fix broken links in cryoem page
camillescott Jun 25, 2024
16687ae
Merge pull request #26 from ucdavis/charles-branch
camillescottatwork Jun 25, 2024
1d9f48c
Merge branch 'main' of github.com:ucdavis/hpccf-docs into camille/gen…
camillescott Jun 25, 2024
2dc97ce
Merge branch 'main' into Parwana/Farm-index
Posmani Jun 25, 2024
edf9948
Merge branch 'main' of github.com:ucdavis/hpccf-docs into Parwana/Far…
Posmani Jun 25, 2024
fdffe85
netbox component connections
Jun 25, 2024
289dfa2
Merge pull request #17 from ucdavis/Parwana/Farm-index
camillescottatwork Jun 25, 2024
bc2dc32
Added Slurm troubleshooting
johnmc-ucdavis Jun 25, 2024
122674d
formatting corrections
Jun 25, 2024
d3e2f99
[admin] Add a Warning admonition on each admin page telling users tha…
camillescott Jun 25, 2024
91ff672
Merge pull request #21 from ucdavis/camille/general/software
camillescottatwork Jun 25, 2024
0335fa7
[admin] Add a Warning admonition on each admin page telling users tha…
camillescott Jun 25, 2024
5a956ee
Merge branch 'camille/admin-header' of github.com:ucdavis/hpccf-docs …
camillescott Jun 25, 2024
a23ca15
[org] fix confusing section indexes; anchor some orphan pages; add me…
camillescott Jun 25, 2024
9fd8aa7
Some more nav reorg; give farm and general unit logos
camillescott Jun 25, 2024
17449bb
Replace polyfill.io CDN use due to supply chain attack
camillescott Jun 26, 2024
0bf48bf
Merge pull request #33 from ucdavis/polyfill-security-fix
camillescottatwork Jun 26, 2024
22782cc
[nav] tweaks
camillescott Jun 26, 2024
60e9aba
Merge pull request #31 from ucdavis/camille/nav-org
camillescottatwork Jun 26, 2024
a16f202
Merge branch 'main' into sam-changes
camillescott Jun 26, 2024
e7b64a7
[netbox] a bit of cleanup
camillescott Jun 26, 2024
3216d07
Merge pull request #23 from ucdavis/sam-changes
camillescottatwork Jun 26, 2024
4a741fd
[admin] Add a Warning admonition on each admin page telling users tha…
camillescott Jun 25, 2024
27f83c3
merge
camillescott Jun 26, 2024
9e2d797
[admin] about page
camillescott Jun 26, 2024
bef27db
Merge pull request #29 from ucdavis/camille/admin-header
camillescottatwork Jun 26, 2024
a1390f3
Added general troubleshooting page
Jun 24, 2024
0f7ac6e
Made suggested changes
Jun 24, 2024
e7c7a54
Made suggested changes again
johnmc-ucdavis Jun 24, 2024
bae3e44
Made suggested changes again
johnmc-ucdavis Jun 24, 2024
3ae02a7
Made suggested changes once more
johnmc-ucdavis Jun 24, 2024
8c82f4b
Added more details
johnmc-ucdavis Jun 24, 2024
2e664ab
Added Kerberos stuff
johnmc-ucdavis Jun 24, 2024
8cf242b
Consistency tweak
johnmc-ucdavis Jun 25, 2024
9fc6d10
Added Slurm troubleshooting
johnmc-ucdavis Jun 25, 2024
a45ab3c
Add information about Partitions and Accounts to Scheduler::Resources…
camillescott Jun 26, 2024
0c58fd3
conflict resolution
camillescott Jun 26, 2024
2 changes: 2 additions & 0 deletions .gitattributes
@@ -0,0 +1,2 @@
* text=auto

1 change: 1 addition & 0 deletions .gitignore
@@ -2,3 +2,4 @@ __pycache__
site
spack-ucdavis
venv
*.DS_Store
5 changes: 4 additions & 1 deletion docs/admin/configuration.md
@@ -1,3 +1,6 @@
# Configuration Management
---
template: admin.html
title: Configuration Management
---

ie: puppet
13 changes: 13 additions & 0 deletions docs/admin/index.md
@@ -0,0 +1,13 @@
---
title: About
---

This section is for HPCCF admins to document our internal infrastructure, processes, and
architectures.
Although the information may be of _interest_ to end users, it is not designed or maintained for
their consumption; nothing written here should be construed as an offering of service.
For example, although we describe our [Virtual Machine](vms.md) infrastructure, which we use for
hosting a variety of production-essential services for our clusters, we do **not** offer VM hosting
for end users.


5 changes: 4 additions & 1 deletion docs/admin/monitoring.md
@@ -1 +1,4 @@
# Monitoring
---
template: admin.html
title: Monitoring
---
61 changes: 60 additions & 1 deletion docs/admin/netbox.md
@@ -1 +1,60 @@
# Netbox
---
template: admin.html
title: Netbox
---

[HPCCF's Netbox Site](https://netbox.hpc.ucdavis.edu/dcim/sites/) is the source of truth for our
rack layouts, network addressing, and other infrastructure. NetBox is an infrastructure resource modeling (IRM) application designed to empower network automation; it was developed specifically to address the needs of network and infrastructure engineers.

## Netbox Features

- Comprehensive Data Model
- Focused Development
- Extensible and Customizable
- Flexible Permissions
- Custom Validation & Protection Rules
- Device Configuration Rendering
- Custom Scripts
- Automated Events
- Comprehensive Change Logging

## Netbox Administration

This section will give an overview of how HPCCF admins utilize and administer Netbox.

### How to add assets into Netbox

1. Navigate to HPCCF's Netbox instance here: [HPCCF's Netbox Site](https://netbox.hpc.ucdavis.edu/dcim/sites/) ![netbox1](../img/netbox1.jpeg)

2. Select the site to which you will be adding an asset. In this example I have chosen Campus DC: ![netbox2](../img/netbox2.jpeg)

3. Scroll to the bottom of the page and select the location to which you will add your asset; here I chose the Storage Cabinet: ![netbox3](../img/netbox3.jpeg) ![netbox4](../img/netbox4.jpeg)

4. On this page, scroll to the bottom and select Add a Device: ![netbox5](../img/netbox5.jpeg)

5. After you have selected Add a Device, you should see a page like this: ![netbox6](../img/netbox6.jpeg)

6. Fill out this page with the specifics of the asset. Some fields are not required, but try to fill out this section as completely as possible with the fields available. Here is an example of a created asset and how it should look: ![netbox7](../img/netbox7.jpeg)![netbox8](../img/netbox8.jpeg)![netbox9](../img/netbox9.jpeg)![netbox10](../img/netbox10.jpeg)![netbox11](../img/netbox11.jpeg)![netbox12](../img/netbox12.jpeg)![netbox13](../img/netbox13.jpeg)

7. Click Save to add the device. (A scripted alternative using the REST API is sketched below.)
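
That scripted alternative goes through NetBox's REST API rather than the GUI. The sketch below is illustrative only: the token, numeric IDs, and exact field names are assumptions (field names vary between NetBox versions), and the GUI workflow above remains the documented procedure.

```bash
# Hypothetical sketch: create a device via the NetBox REST API.
# $NETBOX_TOKEN, the numeric IDs, and the field names are placeholders; check
# the API schema for our NetBox version before using anything like this.
curl -s -X POST "https://netbox.hpc.ucdavis.edu/api/dcim/devices/" \
  -H "Authorization: Token $NETBOX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "example-node-01", "device_type": 1, "role": 1, "site": 1, "status": "active"}'
```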

### How to add components to an Asset

1. On the asset page, select the + Add Components dropdown and choose the component you wish to add; for this example I have chosen a Console Port: ![netbox14](../img/netbox14.jpeg)

2. Here again, fill out the dropdowns as thoroughly as possible. The example here is an interface that has already been added: ![netbox15](../img/netbox15.jpeg)![netbox16](../img/netbox16.jpeg)![netbox17](../img/netbox17.jpeg)![netbox18](../img/netbox18.jpeg)

3. Again, make sure to click Save so the component is added.

4. This process can be used to add any of the following components to a device: ![netbox19](../img/netbox19.jpeg)

### How to connect components

1. After a component has been created, such as an interface, power port, or any other type of component, you will want to connect it to something. The process is similar for any component within Netbox. This example shows how to connect an InfiniBand port on a device to a port on an InfiniBand switch. First, navigate to the device you wish to work with and select the appropriate tab (in this case, Interfaces); you will see a page like this: ![netbox20](../img/netbox20.jpeg)

2. Here we will connect ib1 to an InfiniBand switch by clicking the green dropdown to the right of ib1. Because we are connecting to another interface on the InfiniBand switch, we choose Interface, as shown here: ![netbox21](../img/netbox21.jpeg)

3. Once selected, you will come to a screen that looks like this: ![netbox22](../img/netbox22.jpeg)![netbox23](../img/netbox23.jpeg)![netbox24](../img/netbox24.jpeg)

4. Fill in the required information to complete the connection (and any additional information that can be provided), then make sure to create the connection at the bottom. Your screen should look something like this: ![netbox25](../img/netbox25.jpeg)![netbox26](../img/netbox26.jpeg)![netbox27](../img/netbox27.jpeg) Recorded connections can also be checked from the command line, as sketched below.
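
A hedged sketch of such a check (the device name and token are placeholders), listing the interfaces NetBox has on file for a device along with any cable and connection details returned by the API:

```bash
# Hypothetical sketch: list the interfaces recorded for one device.
# The device name and $NETBOX_TOKEN are placeholders.
curl -s -H "Authorization: Token $NETBOX_TOKEN" \
  "https://netbox.hpc.ucdavis.edu/api/dcim/interfaces/?device=example-node-01"
```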

5 changes: 4 additions & 1 deletion docs/admin/network.md
@@ -1 +1,4 @@
# Network Architecture
---
template: admin.html
title: Network
---
5 changes: 4 additions & 1 deletion docs/admin/provisioning.md
@@ -1,3 +1,6 @@
# Provisioning
---
template: admin.html
title: Provisioning
---

(cobbler, etc)
5 changes: 4 additions & 1 deletion docs/admin/software.md
@@ -1,4 +1,7 @@
# Software
---
template: admin.html
title: Software Deployment
---

## Spack

5 changes: 4 additions & 1 deletion docs/admin/vms.md
@@ -1 +1,4 @@
# Virtual Machines
---
template: admin.html
title: Virtual Machines
---
Binary file modified docs/assets/HPC-unit-signature.png
Binary file modified docs/assets/HPCCF-logo-dark.png
Binary file modified docs/assets/HPCCF-logo.png
Binary file added docs/assets/UCDavis_CAES_logo_RGB_vector.eps
256 changes: 256 additions & 0 deletions docs/assets/UCDavis_CAES_logo_RGB_vector.svg
Binary file added docs/assets/hpccf-logo-square.png
4 changes: 3 additions & 1 deletion docs/data-transfer.md
@@ -1 +1,3 @@
# Data Transfer
---
title: Data Transfer
---
81 changes: 81 additions & 0 deletions docs/farm/index.md
@@ -0,0 +1,81 @@

# Farm

![CAES unit signature](../assets/UCDavis_CAES_logo_RGB_vector.svg){ width="400" align="right" }



Farm is a Linux-based supercomputing cluster for the [College of Agricultural and Environmental Sciences](https://caes.ucdavis.edu/) at UC Davis.
Designed for both research and teaching, it is a significant campus resource primarily for CPU and RAM-based
computing, with a wide selection of centrally-managed software available for research in genetics, proteomics, and
related bioinformatics pipelines, weather and environmental modeling, fluid and particle simulations, geographic
information system (GIS) software, and more.

To buy in to resources on the Farm cluster, contact CAES IT director Adam
Getchell - <[email protected]>.

## Farm Hardware

Farm is an evolving cluster that changes and grows to meet the current needs of researchers, and has undergone three
phases, with Farm III as the most recent evolution.

Farm III consists of 32 parallel nodes with up to 64 CPUs and
256GB RAM each in low2/med2/high2, plus 17 “bigmem” nodes with up to 96 CPUs and 1TB RAM each in the bml/bmm/bmh
queues. All Farm III bigmem and newer parallel nodes and storage are on EDR/100Gbit interconnects. Older parallel nodes
and storage are on FDR/55Gbit.

Farm II consists of 95 parallel nodes with 24 CPUs and 64GB RAM each in low/med/high,
plus 9 “bigmem” nodes with 64 CPUs and 512GB RAM each in the bigmeml/bigmemm/bigmemh queues, and 1 additional node
with 96 CPUs and 1TB RAM. Farm II nodes are on QDR/32Gbit interconnects.

Hardware from both Farm II and Farm III is still in service; Farm I was decommissioned in 2014.

Farm also has multiple file servers with over 5.3PB of storage space in total.

## Access to Farm

All researchers in CA&ES are entitled to free access to:

- 8 nodes with 24 CPUs and 64GB RAM each (up to a maximum of 192 CPUs and 512GB RAM) in Farm II’s low, medium, and high-priority batch queues;

- 4 nodes with 352 CPUs and 768GB RAM each in Farm III's low2, med2, and high2-priority batch queues;

- the bml (bigmem, low priority/requeue) partition, which has 24 nodes with a combined 60 TB of RAM.

In addition to this, each new user is allocated a 20GB home directory. If you
want to use the CA&ES free tier, select “CA&ES free tier” from the list of sponsors [here](https://hippo.ucdavis.edu/Farm/myaccount).

Additional usage and access may be purchased by contributing to Farm III through the node and/or
storage rates, or by purchasing equipment and contributing through the rack fee rate.

Contributors always receive priority access to the resources that they have purchased within one
minute, under the “one-minute guarantee.” Users can also request additional unused resources on a
“fair share” basis in the medium or low partitions.
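
For example, a contributor's job might target their group's medium-priority resources with a script along these lines. This is a sketch only: the account name, module, and resource requests are placeholders, and the partition list should be confirmed with `sinfo`.

```bash
#!/bin/bash
#SBATCH --job-name=example        # illustrative Farm job script
#SBATCH --partition=med2          # medium-priority partition on Farm III (low2/med2/high2)
#SBATCH --account=examplegrp      # placeholder Slurm account; use your group's account
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --time=02:00:00

module load example-software      # placeholder module name
srun ./my_analysis.sh             # placeholder command
```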

## Farm Administration

Farm hardware and software are administered by the [HPC Core Facility Team](https://hpc.ucdavis.edu/people).

### Current Rates

As of October 2023, the rates for Farm III are as follows.

Node and storage rates (each buy-in guarantees access for 5 years):

- **Parallel (CPU) node:** $13,500 (512 GB RAM, 128 cores/256 threads, 2 TB /scratch)
- **Bigmem node:** $25,000 (2TB RAM, 128 cores/256 threads; bml, bmm, bmh partitions)
- **GPU:** $19,500 for 1/4 of a GPU node (A100 with 80GB GPU RAM, 16 CPU cores/32 threads, 256GB system RAM)
- **Storage:** $100/TB with compression for 5 years (does not include backups)

### Bring your own equipment

Equipment may be purchased directly by researchers based on actual cost; quotes are available upon request.

- Equipment purchases not using the above rates: $375/year per rack unit for five years.

For more information about buying into Farm, contact [Adam Getchell]([email protected]) or the [Helpdesk]([email protected]).
27 changes: 27 additions & 0 deletions docs/farm/resources.md
@@ -0,0 +1,27 @@
## Farm Cluster Resources

Sponsor - CAES

Information about partitions - what are low, med, and high, and what GPUs are available on Farm?

Free tier access includes storage capped at 20GB.

Low partition - intermittent access to idle resources above your permitted limit

Medium partition - shared use of idle resources above the permitted limit

High partition - dedicated use of invested resources

CPU threads - 15,384

GPU count - 29

Aggregate RAM - 66 TB

Maximum RAM per node - 2TB

Node count - 202

Interconnect - 200Gbps

Total number of users - 726/328
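
Once logged in, the live partition layout can be inspected directly from Slurm. This is a generic sketch, and the output will differ from the summary above as the cluster evolves.

```bash
# Summarize partitions, node counts, and time limits
sinfo --summarize

# Show the full configuration of a single partition, e.g. high2
scontrol show partition high2
```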

3 changes: 3 additions & 0 deletions docs/franklin/resources.md
@@ -0,0 +1,3 @@
---
title: Resources
---
3 changes: 3 additions & 0 deletions docs/franklin/scheduling.md
@@ -1,3 +1,6 @@
---
title: Job Scheduling
---

## Partitions

6 changes: 3 additions & 3 deletions docs/franklin/software/cryoem.md
@@ -18,7 +18,7 @@ If you are completely unfamiliar with Relion, you should start with the [tutoria

!!! Note
Because Relion is GUI driven, you need to `ssh` to Franklin with X11 forwarding enabled.
Instructions for enabling X11 forwarding can be found in the [Access](../general/access.md#x11-forwarding) section.
Instructions for enabling X11 forwarding can be found in the [Access](../../general/access.md#x11-forwarding) section.

### Launching Relion

@@ -34,7 +34,7 @@ Currently Loaded Modules Matching: relion
Change your working directory to your Relion project directory and type `relion`.
The Relion GUI should then pop up locally.
There will be a bit of latency when using it, especially if you are off campus.
You may be able to reduce latency by [enabling SSH compression](../general/access.md#x11-forwarding).
You may be able to reduce latency by [enabling SSH compression](../../general/access.md#x11-forwarding).

<figure markdown>
![The Relion start screen.](../../img/relion_start_gui.png)
@@ -84,7 +84,7 @@ The default GUI fields serve their original purposes:

- **Number of MPI procs**: This will fill the Slurm `--ntasks` parameter. These tasks may be distributed across multiple nodes, depending on the number of **Threads** requested. For GPU runs, this should be the number of GPUs **+ 1**.
- **Number of Threads**: This will fill the Slurm `--cpus-per-task` parameter, which means it is the *number of threads per MPI proc*. Some job types do not expose this field, as they can only be run with a single thread per MPI proc. (See the sketch after this list for how these fields map onto Slurm flags.)
- **Queue name**: The Slurm partition to submit to, filling the `--partition` parameter. More information on partitions can be found in the [**Queueing**](../../scheduler/queues.md) section.
- **Queue name**: The Slurm partition to submit to, filling the `--partition` parameter. More information on partitions can be found in the [**Queueing**](../resources.md) section.
- **Standard submission script**: The location of the Slurm job script template that will be used. This field will be filled with the appropriate template for the loaded Relion module by default, and should not be changed. *For advanced users only:* if you are familiar with Relion and want to further fine-tune your Slurm scripts, you can write your own based on the provided templates found in `/share/apps/spack/templates/hpccf/franklin` or [in our spack GitHub repo](https://github.com/ucdavis/spack-ucdavis/tree/main/templates/hpccf/franklin).
- **Minimum dedicated cores per node**: Unused on our system.
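
To make the mapping concrete, a GUI job requesting 4 GPUs with 5 MPI procs and 4 threads per proc corresponds roughly to the Slurm options below. This is an illustrative sketch, not the template shipped with the module; the partition name and wrapper script are assumptions.

```bash
# Illustrative mapping of Relion GUI fields to Slurm flags (not the real template):
#   "Number of MPI procs" -> --ntasks          (GPUs + 1 for GPU runs)
#   "Number of Threads"   -> --cpus-per-task
#   "Queue name"          -> --partition       (partition name below is assumed)
sbatch --partition=gpu --gres=gpu:4 --ntasks=5 --cpus-per-task=4 my_relion_job.sh
```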

4 changes: 4 additions & 0 deletions docs/franklin/storage.md
@@ -1,3 +1,7 @@
---
title: Storage
---

## Home Directories

All users are allocated 20GB of storage for their home directory.
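
To see how much of that allocation you are currently using, a generic check (exact quota tooling depends on the filesystem, so treat this as a rough sketch) is:

```bash
# Report the total size of your home directory; may take a moment on large trees
du -sh "$HOME"
```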
42 changes: 40 additions & 2 deletions docs/general/access.md
@@ -1,3 +1,37 @@
In order to access your HPC account, you will need to generate an SSH key pair for authentication: a public key and a private key.
The private key is kept securely on your computer or device.
The public key is uploaded to the servers or systems that you want to access securely via SSH.

## How do I generate an SSH key pair?

### Windows Operating System

We recommend MobaXterm as the most straightforward SSH client. You can download its free Home Edition (Installer Edition) from https://mobaxterm.mobatek.net/.
The MobaXterm Portable Edition is not recommended, as it deletes saved sessions. Once you have installed the stable version of MobaXterm, open its terminal and enter this command:


`ssh-keygen`

This command will create a private key and a public key. Do not share your private key; we recommend giving it a passphrase for security.
To view the .ssh directory and to read the public key, enter these commands:

```
ls -al ~/.ssh
more ~/.ssh/*.pub
```

### macOS

Use a terminal to create an SSH key pair using the command:

`ssh-keygen`

To view the .ssh directory and to read the public key, enter these commands:

```
ls -al ~/.ssh
more ~/.ssh/*.pub
```
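
On either platform, you can also be explicit about the key type and attach a comment so the key is easy to identify later; the comment string here is just an example.

```bash
# Generate an Ed25519 key with an identifying comment; accept the default path
# and set a passphrase when prompted.
ssh-keygen -t ed25519 -C "[email protected]"
```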

## X11 Forwarding

@@ -11,15 +11,19 @@ If you are SSHing from a Linux distribution, you likely already have an X11 serv
If you are on campus, you can use the `-Y` flag to enable it, like:

```bash
$ ssh -Y [USER]@franklin.hpc.ucdavis.edu
$ ssh -Y [USER]@[CLUSTER].hpc.ucdavis.edu
```

If you are off campus on a slower internet connection, you may get better performance by enabling compression with:

```bash
$ ssh -Y -C [USER]@franklin.hpc.ucdavis.edu
$ ssh -Y -C [USER]@[CLUSTER].hpc.ucdavis.edu
```

If you have multiple SSH key pairs and you want to use a specific private key to connect to the clusters, use the `-i` option to specify the path to the private key:

```bash
$ ssh -i /path/to/private/key [USER]@[CLUSTER].hpc.ucdavis.edu
```
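
To avoid typing `-i` and the full hostname every time, you can record these settings in `~/.ssh/config`. A minimal sketch, assuming a hypothetical username and key path:

```bash
# Append a host alias to ~/.ssh/config (username and key path are placeholders)
cat >> ~/.ssh/config <<'EOF'
Host franklin
    HostName franklin.hpc.ucdavis.edu
    User your_username
    IdentityFile ~/.ssh/id_ed25519
EOF

# Then connect with just:
ssh franklin
```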
### macOS

macOS does not come with an X11 implementation out of the box.
28 changes: 22 additions & 6 deletions docs/general/account-requests.md
100644 → 100755
@@ -1,13 +1,29 @@
# Requesting an Account

## Choosing a Cluster
HPC accounts are provisioned on a per-cluster basis and granted with the permission of the user's principal investigator (PI). Accounts provisioned under each PI have access to that PI's purchased resources and their own separate home directory.

Information on departments and uses for the major clusters
Access to HPC clusters is granted via the use of SSH keys. An SSH public key is required to generate an account. For information on creating SSH keys, please visit the [access](https://docs.hpc.ucdavis.edu/general/access/) documentation page.

## HiPPO
### HiPPO

How to hippo for farm, franklin, peloton
The High-Performance Personnel Onboarding (HiPPO) portal can provision resources for the Farm, Franklin, and Peloton HPC clusters. Users can request an account on [HiPPO](https://hippo.ucdavis.edu) by logging in with UC Davis CAS and selecting their PI.

## Other
Users who do not have a PI and are interested in sponsored tiers for Farm can request an account by selecting the IT director for CAES, Adam Getchell, as their PI.

For non-HiPPO clusters, please see their cluster-specific accounts section.
Users who do not have a PI and who are affiliated with the College of Letters and Science can request a sponsored account on Peloton by selecting the IT director for CLAS, Jeremy Phillips, as their PI.


### HPC1 and HPC2

Users who are associated with PIs in the College of Engineering can request accounts on [HPC1](https://wiki.cse.ucdavis.edu/cgi-bin/engr.pl) and [HPC2](https://hpc.ucdavis.edu/form/account-request-form) by going to the appropriate web form.

### LSSC0 (Barbera)

Users who want access to resources on LSSC0 can request an account through the Genome Center Computing [Portal](https://computing.genomecenter.ucdavis.edu/) by selecting 'Request an Account' and choosing their PI.

### Atomate

Atomate accounts can be requested [here](https://wiki.cse.ucdavis.edu/cgi-bin/atomate.pl).

### Cardio, Demon, Impact

Accounts on these systems can be requested [here](https://wiki.cse.ucdavis.edu/cgi-bin/index2.pl).
Empty file removed docs/general/index.md