Apps FAQ
Open OnDemand¶
Q: Why is my Open OnDemand session failing?
A: If you're encountering issues while using OOD, please visit our Open OnDemand Troubleshooting page.
Python¶
Q: I installed a Python package via 'pip install PACKAGEX', but 'import PACKAGEX' results in an error.
A: A personal pip install places the resulting package into your home directory, under the ~/.local/lib/pythonX.Y/site-packages directory tree. Such an install often pulls in a pre-built binary archive (wheel) that was compiled against software libraries not compatible with HiPerGator. A typical error message in this case complains about a missing GLIBC version or some other missing library. The issue can be exacerbated by an incompatible interaction between an environment loaded via an environment module ('module load something') and a personally installed Python package. To avoid this, install the package into an isolated environment. The right approach for creating such an environment depends on many factors, but it usually results in a Conda or containerized environment.
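To confirm which copy of a package Python is actually importing, and whether it came from a personal pip install, a quick check like the following can help (PACKAGEX is a placeholder for your package name):

```shell
# Print the file Python actually loads for the package; a path under
# ~/.local/lib/pythonX.Y/site-packages indicates a personal pip install
python -c "import PACKAGEX; print(PACKAGEX.__file__)"

# List only the packages installed in your personal ~/.local tree
pip list --user

# Remove a conflicting personal install before rebuilding the package
# in an isolated environment
pip uninstall -y PACKAGEX
```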
Custom Installation¶
Q: I want to have a custom install of an application or python modules.
A: We recommend creating a Conda environment and installing the needed packages with the 'mamba' tool from the conda environment module. It is possible to mix conda- and pip-installed packages inside a conda environment, since conda/mamba is aware of packages installed via pip, but not vice versa.
See also: Installing Personal Python Modules and Managing Conda Environments
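As a sketch, a typical session looks like the following (the module name matches the conda environment module mentioned above; the environment name "myenv" and the package list are placeholders to substitute with your own):

```shell
# Load the conda environment module, which provides mamba
module load conda

# Create an isolated environment with the conda packages you need
mamba create -n myenv python=3.11 numpy

# Activate it, then add any pip-only packages afterwards so that
# conda/mamba can keep track of them
mamba activate myenv
pip install SOME_PIP_ONLY_PACKAGE
```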
R¶
Q: How do I install R packages?
A: Users can install R packages in their local directory. The default directory is /home/my.username/R/x86_64-pc-linux-gnu-library/X.X/ (where X.X is the R version number).
From a standard repository (such as CRAN)
From GitHub
From a tarball
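As a sketch, the three cases above look like this in an R session (the package name, GitHub repository, and tarball filename are placeholders; installing from GitHub assumes the 'remotes' package is available):

```r
# From a standard repository (CRAN)
install.packages("PACKAGE")

# From GitHub, using the 'remotes' package
remotes::install_github("username/PACKAGE")

# From a source tarball in the current directory
install.packages("PACKAGE_1.0.tar.gz", repos = NULL, type = "source")
```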
Q: When I submit a job using the 'parallel' package, all threads seem to share a single CPU core instead of running on the separate cores I requested.
A: Under SLURM, you need to use --cpus-per-task to specify the number of cores available to your job. E.g.
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12
will allow mclapply or other functions from the 'parallel' package to run on all requested cores.
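Inside the R script, the number of cores granted by SLURM can then be read from the environment rather than hard-coded (a sketch; the fallback of 1 core is an assumption for when the variable is not set):

```r
library(parallel)

# Read the core count that SLURM assigned via --cpus-per-task;
# fall back to 1 if the variable is not set
n_cores <- as.integer(Sys.getenv("SLURM_CPUS_PER_TASK", unset = "1"))

# Run a function over a list using all requested cores
results <- mclapply(1:100, function(x) x^2, mc.cores = n_cores)
```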
Jupyter¶
Q: Why do I see the following error message? "(kernel).ipynb appears to have died. It will restart automatically."
A: This is typically caused by the kernel using more RAM than was requested when the session was started. Increase your memory request and start a new session.
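To see how much memory the session actually used before the kernel died, SLURM's accounting can be queried after the job ends (JOBID is a placeholder for your job ID; 'sacct' is the standard SLURM accounting command):

```shell
# Compare the peak memory used (MaxRSS) against what was requested (ReqMem)
sacct -j JOBID --format=JobID,JobName,ReqMem,MaxRSS,State
```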
Q: Why am I not able to spawn a Jupyter session?
A: One common cause of being unable to log in to a Jupyter (JupyterHub or Jupyter Notebook) session is running out of home directory quota. See the FAQ item above for "No Space Left".
Another cause is packages conflicting while the session loads. In that case, look for errors in the session output and check whether packages from the user's local environment are listed.
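Common space consumers in the home directory are personal pip and conda installs. A quick check with standard Linux tools (a sketch; HiPerGator may also provide its own quota utilities):

```shell
# See how much each hidden install/cache tree in the home directory uses;
# ~/.local and ~/.conda are the usual pip/conda culprits
du -sh ~/.local ~/.conda ~/.cache 2>/dev/null

# Show filesystem-level usage for the home directory
df -h ~
```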