Tesla V100 GPU nodes

Predrag Punosevac predragp at andrew.cmu.edu
Tue Aug 18 00:36:28 EDT 2020


Dear Autonians,

As we are gearing up for a new school year, I would like to remind
everyone that we are sharing finite resources. We are still relying on
our ladies'/gentlemen's agreement

https://www.autonlab.org/autonlab_wiki/faq.html

rather than on the Slurm scheduler. Since we have started acquiring
very expensive high-memory GPU servers (Tesla V100) suitable for
training 3D neural networks, the notable addition to our don'ts is the
recommendation that those machines not be used when your jobs can run
on lower-memory GPUs. We will be adding both high-memory and
lower-memory GPUs as new rack space and electricity become available
in the coming weeks.
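One easy way to honor this is to check free GPU memory before launching a job and pick the smallest GPU that fits. A minimal sketch using `nvidia-smi` (the helper names and the 8000 MiB threshold are illustrative, not a lab policy):

```python
import subprocess

def parse_gpu_memory(csv_text):
    """Parse the output of:
    nvidia-smi --query-gpu=index,memory.total,memory.free \
               --format=csv,noheader,nounits
    into a list of dicts with MiB values."""
    gpus = []
    for line in csv_text.strip().splitlines():
        idx, total, free = (field.strip() for field in line.split(","))
        gpus.append({"index": int(idx),
                     "total_mib": int(total),
                     "free_mib": int(free)})
    return gpus

def pick_smallest_fitting_gpu(gpus, needed_mib):
    """Prefer the smallest GPU that still fits the job, so the
    high-memory V100s stay free for jobs that genuinely need them."""
    fitting = [g for g in gpus if g["free_mib"] >= needed_mib]
    return min(fitting, key=lambda g: g["total_mib"]) if fitting else None

if __name__ == "__main__":
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,memory.total,memory.free",
         "--format=csv,noheader,nounits"], text=True)
    print(pick_smallest_fitting_gpu(parse_gpu_memory(out), needed_mib=8000))
```

You could then export `CUDA_VISIBLE_DEVICES` to the chosen index so your framework only sees that card.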

Most Kind Regards,
Predrag Punosevac
