GPU shared usage
Yotam Hechtlinger
yhechtli at andrew.cmu.edu
Mon May 14 17:56:15 EDT 2018
Hello Everyone,
The GPUs are super busy right now, with all of them taken. Some of the GPUs
seem to be locked but not actually running anything, so if there is a
process you can release, that would be great.
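If you're not sure whether one of your own processes is holding a card, a quick way to check is nvidia-smi (assuming it's installed on the server; the PID 12345 below is purely illustrative):

```shell
# Show per-GPU utilization and the PIDs currently holding each card
nvidia-smi

# Check which user and command a given PID belongs to (12345 is a placeholder)
ps -o user=,cmd= -p 12345

# If it turns out to be your own stale job, free the card
kill 12345
```
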
Also, by default TensorFlow locks all available GPUs on the server
whenever you start it. If you don't actually intend to run on
several GPUs, this will restrict your code to a single one (set it
before importing TensorFlow, or it has no effect):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
If you don't need all the memory on the GPU and don't mind other people
sharing the card with you, this will make TensorFlow allocate only as
much memory as it actually needs, instead of grabbing it all up front:
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
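Another option in the same TF 1.x API, if you'd rather set a hard cap than grow on demand, is to limit the fraction of each GPU's memory a process may claim (the 0.3 below is just an illustrative value, not a recommendation):

```python
import tensorflow as tf

# Cap this process at roughly 30% of each visible GPU's memory.
# 0.3 is an example value; pick whatever your job actually needs.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
sess = tf.Session(config=config)
```
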
We're all pretty busy with deadlines. Please be considerate, and try to
avoid grabbing several cards if you don't really need them.
Thanks a lot,
Yotam.