<div dir="ltr"><div>I don't think it would be a bad thing to have both versions of cuda installed and default to 8.0. To use 7.5 for matlab you probably just have to write a wrapper script to set LD_LIBRARY_FLAGS appropriately.</div></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Oct 21, 2016 at 9:21 PM Kirthevasan Kandasamy <<a href="mailto:kandasamy@cmu.edu">kandasamy@cmu.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr" class="gmail_msg">Hi all,<div class="gmail_msg"><br class="gmail_msg"></div><div class="gmail_msg">I was planning on using Matlab with GPUs for one of my projects.</div><div class="gmail_msg">Can we please keep gpu2 as it is for now?</div><div class="gmail_msg"><br class="gmail_msg"></div><div class="gmail_msg">samy</div></div><div class="gmail_extra gmail_msg"><br class="gmail_msg"><div class="gmail_quote gmail_msg">On Fri, Oct 21, 2016 at 3:54 PM, Barnabas Poczos <span dir="ltr" class="gmail_msg"><<a href="mailto:bapoczos@cs.cmu.edu" class="gmail_msg" target="_blank">bapoczos@cs.cmu.edu</a>></span> wrote:<br class="gmail_msg"><blockquote class="gmail_quote gmail_msg" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Sounds good. Let us have tensorflow system wide on all GPU nodes. We<br class="gmail_msg">
On Fri, Oct 21, 2016 at 9:21 PM Kirthevasan Kandasamy <kandasamy@cmu.edu> wrote:

Hi all,

I was planning on using Matlab with GPUs for one of my projects.
Can we please keep gpu2 as it is for now?

samy

On Fri, Oct 21, 2016 at 3:54 PM, Barnabas Poczos <bapoczos@cs.cmu.edu> wrote:

Sounds good. Let us have tensorflow system wide on all GPU nodes. We can
worry about Matlab later.

Best,
B
<span class="m_3828301640574875414im m_3828301640574875414HOEnZb gmail_msg">======================<br class="gmail_msg">
Barnabas Poczos, PhD<br class="gmail_msg">
Assistant Professor<br class="gmail_msg">
Machine Learning Department<br class="gmail_msg">
Carnegie Mellon University<br class="gmail_msg">
<br class="gmail_msg">
<br class="gmail_msg">
</span><div class="m_3828301640574875414HOEnZb gmail_msg"><div class="m_3828301640574875414h5 gmail_msg">On Fri, Oct 21, 2016 at 3:50 PM, Predrag Punosevac <<a href="mailto:predragp@cs.cmu.edu" class="gmail_msg" target="_blank">predragp@cs.cmu.edu</a>> wrote:<br class="gmail_msg">
> Barnabas Poczos <bapoczos@cs.cmu.edu> wrote:
>
>> Hi Predrag,
>>
>> If there is no other solution, then I think it is OK not to have
>> Matlab on GPU2 and GPU3.
>> Tensorflow has higher priority on these nodes.
>
> We could possibly have multiple CUDA libraries for different versions,
> but that is going to bite us in the rear end quickly. People who want
> to use MATLAB with GPUs will have to live with GPU1, probably until
> the Spring release of MATLAB.
>
> Predrag
>
>>
>> Best,
>> Barnabas
>>
>>
>> ======================
>> Barnabas Poczos, PhD
>> Assistant Professor
>> Machine Learning Department
>> Carnegie Mellon University
>>
>>
>> On Fri, Oct 21, 2016 at 3:37 PM, Predrag Punosevac <predragp@cs.cmu.edu> wrote:
>> > Dougal Sutherland <dougal@gmail.com> wrote:
>> >
>> >
>> > Sorry that I am late to the party. This is my interpretation of what we
>> > should do.
>> >
>> > 1. I will go back to CUDA 8.0, which will break MATLAB. We have to live
>> > with it. Barnabas, please OK this. I will work with MathWorks to get it
>> > fixed for the 2017a release.
>> >
>> > 2. Then I could install the TensorFlow that Dougal compiled system wide.
>> > Dougal, please recompile it against CUDA 8.0 after I upgrade back. I
>> > could give you the root password so that you can compile and install
>> > directly.
>> >
>> > 3. If everyone is OK with the above, I will pull the trigger on GPU3 at
>> > 4:30 PM and upgrade to 8.0.
>> >
>> > 4. MATLAB will be broken on GPU2 as well after I put the Titan cards in
>> > during the October 25 power outage.
>> >
>> > Predrag
>> >
>> >
>> >> Heh. :)
>> >>
>> >> An explanation:
>> >>
>> >> - Different nvidia gpu architectures are called "compute capabilities".
>> >> This is a number that describes the behavior of the card: the maximum
>> >> size of various things, which API functions it supports, etc. There's a
>> >> reference here
>> >> <https://en.wikipedia.org/wiki/CUDA#Version_features_and_specifications>,
>> >> but it shouldn't really matter.
>> >> - When CUDA compiles code, it targets a certain architecture, since it
>> >> needs to know what features to use and whatnot. I *think* that if you
>> >> compile for compute capability x, it will work on a card with compute
>> >> capability y approximately iff x <= y.
>> >> - Pascal Titan Xs, like the ones gpu3 has, have compute capability 6.1.
>> >> - CUDA 7.5 doesn't know about compute capability 6.1, so if you ask it
>> >> to compile for 6.1 it crashes.
>> >> - Theano by default tries to compile for the capability of the card, but
>> >> can be configured to compile for a different capability (rough example
>> >> after this list).
>> >> - Tensorflow asks for a list of capabilities to compile for when you
>> >> build it in the first place.
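For the old Theano CUDA backend, forcing an older target looks roughly like this
(flag spelling from memory, so treat it as a sketch; -arch=sm_52 is the nvcc
spelling of compute capability 5.2):

export THEANO_FLAGS="device=gpu,nvcc.flags=-arch=sm_52"

and the equivalent for a one-off nvcc compile of some hypothetical kernel.cu would
be something like:

nvcc -arch=sm_52 -o kernel kernel.cu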
>> >>
>> >>
>> >> On Fri, Oct 21, 2016 at 8:17 PM Dougal Sutherland <dougal@gmail.com> wrote:
>> >>
>> >> > They do work with 7.5 if you specify an older compute architecture; it's
>> >> > just that their actual compute capability of 6.1 isn't supported by cuda
>> >> > 7.5. Theano is thrown off by this, for example, but it can be fixed by
>> >> > telling it to pass compute capability 5.2 (for example) to nvcc. I don't
>> >> > think that this was my problem with building tensorflow on 7.5; I'm not
>> >> > sure what that was.
>> >> >
>> >> > On Fri, Oct 21, 2016, 8:11 PM Kirthevasan Kandasamy <kandasamy@cmu.edu>
>> >> > wrote:
>> >> >
>> >> > Thanks Dougal. I'll take a look at this and get back to you.
>> >> > So are you suggesting that this is an issue with the Titan Xs not being
>> >> > compatible with 7.5?
>> >> >
>> >> > On Fri, Oct 21, 2016 at 3:08 PM, Dougal Sutherland <dougal@gmail.com>
>> >> > wrote:
>> >> >
>> >> > I installed it in my scratch directory (not sure if there's a global
>> >> > install?). The main thing was to put its cache on scratch; it got really
>> >> > upset when the cache directory was on NFS. (Instructions at the bottom of
>> >> > my previous email.)
>> >> >
>> >> > On Fri, Oct 21, 2016, 8:04 PM Barnabas Poczos <bapoczos@cs.cmu.edu> wrote:
>> >> >
>> >> > That's great! Thanks Dougal.
>> >> >
>> >> > As I remember bazel was not installed correctly previously on GPU3. Do
>> >> > you know what went wrong with it before and why it is good now?
>> >> >
>> >> > Thanks,
>> >> > Barnabas
>> >> > ======================
>> >> > Barnabas Poczos, PhD
>> >> > Assistant Professor
>> >> > Machine Learning Department
>> >> > Carnegie Mellon University
>> >> >
>> >> >
>> >> > On Fri, Oct 21, 2016 at 2:03 PM, Dougal Sutherland <dougal@gmail.com>
>> >> > wrote:
>> >> > > I was just able to build tensorflow 0.11.0rc0 on gpu3! I used the
>> >> > > cuda 8.0 install, and it built fine. So additionally installing 7.5
>> >> > > was probably not necessary; in fact, cuda 7.5 doesn't know about the
>> >> > > 6.1 compute architecture that the Titan Xs use, so Theano at least
>> >> > > needs to be manually told to use an older architecture.
>> >> > >
>> >> > > A pip package is in ~dsutherl/tensorflow-0.11.0rc0-py2-none-any.whl.
>> >> > > I think it should work fine with the cudnn in my scratch directory.
>> >> > >
>> >> > > You should probably install it to scratch, either by running this
>> >> > > first to put libraries in your scratch directory or by using a
>> >> > > virtualenv or something:
>> >> > > export PYTHONUSERBASE=/home/scratch/$USER/.local
>> >> > >
>> >> > > You'll need this to use the library and probably to install it:
>> >> > > export LD_LIBRARY_PATH=/home/scratch/dsutherl/cudnn-8.0-5.1/cuda/lib64:"$LD_LIBRARY_PATH"
>> >> > >
>> >> > > To install:
>> >> > > pip install --user ~dsutherl/tensorflow-0.11.0rc0-py2-none-any.whl
>> >> > > (remove --user if you're using a virtualenv)
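For the virtualenv route, a minimal version would be roughly the following
(assuming the virtualenv tool is available; the env location is just an example):

virtualenv /home/scratch/$USER/tf-env
source /home/scratch/$USER/tf-env/bin/activate
# the cudnn LD_LIBRARY_PATH export above is still needed at runtime
pip install ~dsutherl/tensorflow-0.11.0rc0-py2-none-any.whl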
>> >> > >
>> >> > > (A request: I'm submitting to ICLR in two weeks, and for some of the
>> >> > > models I'm running, gpu3's cards are 4x the speed of gpu1's or gpu2's.
>> >> > > So please don't run a ton of stuff on gpu3 unless you're working on a
>> >> > > deadline too.)
>> >> > >
>> >> > >
>> >> > > Steps to install it, for the future:
>> >> > >
>> >> > > Install bazel in your scratch directory:
>> >> > >
>> >> > > wget https://github.com/bazelbuild/bazel/releases/download/0.3.2/bazel-0.3.2-installer-linux-x86_64.sh
>> >> > > bash bazel-0.3.2-installer-linux-x86_64.sh --prefix=/home/scratch/$USER --base=/home/scratch/$USER/.bazel
>> >> > >
>> >> > > Configure bazel to build in scratch. There's probably a better way to
>> >> > > do this, but this works:
>> >> > >
>> >> > > mkdir /home/scratch/$USER/.cache
>> >> > > ln -s /home/scratch/$USER/.cache/bazel ~/.cache/bazel
>> >> > >
>> >> > > Build tensorflow. Note that builds from git checkouts don't work,
>> >> > > because they assume a newer version of git than is on gpu3:
>> >> > >
>> >> > > cd /home/scratch/$USER
>> >> > > wget
>> >> > > tar xf
>> >> > > cd tensorflow-0.11.0rc0
>> >> > > ./configure
>> >> > >
>> >> > > This is an interactive script that doesn't seem to let you pass
>> >> > > arguments or anything. It's obnoxious.
>> >> > > - Use the default python
>> >> > > - Don't use cloud platform or hadoop file system
>> >> > > - Use the default site-packages path if it asks
>> >> > > - Build with GPU support
>> >> > > - Default gcc
>> >> > > - Default Cuda SDK version
>> >> > > - Specify /usr/local/cuda-8.0
>> >> > > - Default cudnn version
>> >> > > - Specify $CUDNN_DIR from use-cudnn.sh, e.g.
>> >> > >   /home/scratch/dsutherl/cudnn-8.0-5.1/cuda
>> >> > > - Pascal Titan Xs have compute capability 6.1
>> >> > >
>> >> > > bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
>> >> > > bazel-bin/tensorflow/tools/pip_package/build_pip_package ./
>> >> > >
>> >> > > A .whl file, e.g. tensorflow-0.11.0rc0-py2-none-any.whl, is put in the
>> >> > > directory you specified above.
>> >> > >
>> >> > >
>> >> > > - Dougal
>> >> > >
>> >> > >
>> >> > > On Fri, Oct 21, 2016 at 6:14 PM Kirthevasan Kandasamy <kandasamy@cmu.edu>
>> >> > > wrote:
>> >> > >>
>> >> > >> Predrag,
>> >> > >>
>> >> > >> Any updates on gpu3?
>> >> > >> I have tried both tensorflow and chainer, and in both cases the problem
>> >> > >> seems to be with cuda.
>> >> > >>
>> >> > >> On Wed, Oct 19, 2016 at 4:10 PM, Predrag Punosevac <predragp@cs.cmu.edu>
>> >> > >> wrote:
>> >> > >>>
>> >> > >>> Dougal Sutherland <dougal@gmail.com> wrote:
>> >> > >>>
>> >> > >>> > I tried for a while. I failed.
>> >> > >>> >
>> >> > >>>
>> >> > >>> Damn, this doesn't look good. I guess it's back to the drawing board.
>> >> > >>> Thanks for the quick feedback.
>> >> > >>>
>> >> > >>> Predrag
>> >> > >>>
>> >> > >>> > Version 0.10.0 fails immediately on build: "The specified
>> >> > >>> > --crosstool_top '@local_config_cuda//crosstool:crosstool' is not a
>> >> > >>> > valid cc_toolchain_suite rule." Apparently this is because 0.10
>> >> > >>> > required an older version of bazel
>> >> > >>> > (https://github.com/tensorflow/tensorflow/issues/4368), and I don't
>> >> > >>> > have the energy to install an old version of bazel.
>> >> > >>> >
>> >> > >>> > Version 0.11.0rc0 gets almost done and then complains about no such
>> >> > >>> > file or directory for libcudart.so.7.5 (which is there, where I told
>> >> > >>> > tensorflow it was...).
>> >> > >>> >
>> >> > >>> > Non-release versions from git fail immediately because they call
>> >> > >>> > git -C to get version info, which is only in git 1.9 (we have 1.8).
>> >> > >>> >
>> >> > >>> >
>> >> > >>> > Some other notes:
>> >> > >>> > - I made a symlink from ~/.cache/bazel to
>> >> > >>> > /home/scratch/$USER/.cache/bazel, because bazel is the worst. (It
>> >> > >>> > complains about doing things on NFS, and hung for me
>> >> > >>> > [clock-related?], and I can't find a global config file or anything
>> >> > >>> > to change that in; it seems like there might be one, but their
>> >> > >>> > documentation is terrible.)
>> >> > >>> >
>> >> > >>> > - I wasn't able to use the actual Titan X compute capability of 6.1,
>> >> > >>> > because that requires cuda 8; I used 5.2 instead. Probably not a
>> >> > >>> > huge deal, but I don't know.
>> >> > >>> >
>> >> > >>> > - I tried explicitly including /usr/local/cuda/lib64 in
>> >> > >>> > LD_LIBRARY_PATH and set CUDA_HOME to /usr/local/cuda before
>> >> > >>> > building, hoping that would help with the 0.11.0rc0 problem, but it
>> >> > >>> > didn't.
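(For anyone retrying that later, the attempt in the last note amounts to roughly
the following before kicking off the build; the paths are the ones named above:)

export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:"$LD_LIBRARY_PATH"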