TensorFlow¶
Description¶
TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code. TensorFlow also includes TensorBoard, a data visualization toolkit.
This is an experimental CUDA 11 module for the DGX A100 nodes, based on the NGC container 20.06-tf2-py3-ngc.sif.
Environment Modules¶
Run module spider tensorflow to see which environment modules are available for this application.
Environment Variables¶
Additional Usage Information¶
As of version 2.0, Keras is packaged with TensorFlow as the tensorflow.keras module, and this is the module you should use. Previously, Keras was developed and distributed separately from TensorFlow; see keras vs. tf.keras for details.
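For example, a model built through the bundled tf.keras API looks like the following minimal sketch (the layer sizes and architecture are illustrative only, not a recommendation from this page):

```python
import tensorflow as tf

# A minimal sketch of the bundled tf.keras API: a two-layer
# fully connected network for a 10-feature regression problem.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# (10*64 + 64) + (64*1 + 1) = 769 trainable parameters
print(model.count_params())
```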
To use TensorFlow with one or more GPUs on HiPerGator, you must request the --gpus or --gpus-per-task resource and specify the gpu partition in your job script or on the command line, as described on the GPU Access Help Page.
For example, to start an interactive session with access to a single GPU, you might run the following command:
srun --partition=gpu --gpus=1 --ntasks=1 --mem=4gb --time=08:00:00 --pty bash -i
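Once the session starts and the TensorFlow module is loaded, a quick sanity check confirms that TensorFlow can see the allocated GPU (the exact device name will vary; on a CPU-only node this prints an empty list):

```python
import tensorflow as tf

# List the physical GPU devices visible to this TensorFlow process.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
```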
Job Script Examples¶
To help you get started, here is an example SLURM script for running a Python TensorFlow application on a single GPU on HiPerGator. If you are new to writing SLURM scripts and scheduling jobs, you will want to first read our SLURM Help Page and SLURM Sample Scripts Page. For information about using GPUs on HiPerGator, please see our GPU Access Page.
Tip
Lines beginning with #SBATCH are instructions to the SLURM scheduler. Lines beginning with # are comments to help you understand the script; feel free to delete them if you adapt this script for your own use.
#!/bin/sh
# The job name: you can choose whatever you want for this.
#SBATCH --job-name=my_tensorflow_job
# Your email address and the events for which you want to receive email
# notification (NONE, BEGIN, END, FAIL, ALL).
#SBATCH --mail-user=username@ufl.edu
#SBATCH --mail-type=ALL
# The compute configuration for the job. For a job that uses GPUs, the
# partition must be set to "gpu". This example script requests access
# to a single GPU, 16 CPUs, and 30 GB of RAM for a single TensorFlow task.
#SBATCH --nodes=1
#SBATCH --partition=gpu
#SBATCH --ntasks=1
#SBATCH --gpus-per-task=1
#SBATCH --cpus-per-task=16
#SBATCH --mem=30gb
# Specifies how long the job will be allowed to run in HH:MM:SS.
#SBATCH --time=05:05:05
# The log file for all job output. Note the special string "%j", which
# represents the job number.
#SBATCH --output=job_output_%j.out
# Prints the working directory, name of the assigned node, and
# date/time at the top of the output log.
pwd; hostname; date
module load tensorflow/2.7
# This should be the command you would use if you were running your TensorFlow application from the terminal.
python my_tensorflow_script.py
date
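The script above assumes a file named my_tensorflow_script.py exists in the submission directory. As a hypothetical illustration only (none of the code below comes from the module itself), such a script might fit a small model on synthetic data so the job runs end to end:

```python
# Hypothetical my_tensorflow_script.py: train a tiny regression model
# on synthetic data. All names and sizes here are placeholders.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 10)).astype("float32")
y = x.sum(axis=1, keepdims=True)  # a trivially learnable target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

history = model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print("final loss:", history.history["loss"][-1])
```

When TensorFlow detects a GPU, code like this runs on it automatically; no changes to the script are needed for the single-GPU case.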
Citation¶
If you publish research that uses TensorFlow, you must cite it as follows:
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jozefowicz, R., Jia, Y., Kaiser, Ł., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D. G., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., & Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. https://doi.org/10.48550/arXiv.1603.04467
Categories¶
library, math