From chiragn at cs.cmu.edu Tue May 1 12:53:19 2018
From: chiragn at cs.cmu.edu (Chirag Nagpal)
Date: Tue, 1 May 2018 12:53:19 -0400
Subject: torch broken on lov5
Message-ID:

Hi All,

This may be useful for people who like to use PyTorch on CPUs. It seems the torch version that was installed with Anaconda on lov5 has a packaging issue: importing torch fails.

$ python -c "import torch"

should throw an error. A quick workaround for me was to reinstall torch using conda from soumith's channel:

$ conda install pytorch -c soumith

Chirag

--
Chirag Nagpal
Graduate Student, Language Technologies Institute
School of Computer Science
Carnegie Mellon University
cs.cmu.edu/~chiragn
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From predragp at andrew.cmu.edu Fri May 4 12:30:12 2018
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Fri, 04 May 2018 12:30:12 -0400
Subject: autonlab.org is down
In-Reply-To:
References:
Message-ID: <20180504163012.icEFXH8ES%predragp@andrew.cmu.edu>

Donghan Wang wrote:

> Hi Predrag,
>
> The lab website appears to be down. Could you take a look?

I did already. It appears that something is wrong with the PHP interpreter or that somebody hacked into DokuWiki itself. Everything appears to be OK, but even locally DokuWiki is not loading properly. I just e-mailed Simon.

https://www.autonlab.org/ipe
https://www.autonlab.org/sdss

are unaffected, which shows that the proxy server is OK. I am waiting on Simon before I do something more drastic like rolling back with a ZFS snapshot or switching to HUGO, which is still not 100% ready.

Predrag

>
> Thanks,
> Jarod

From predragp at andrew.cmu.edu Fri May 4 13:39:33 2018
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Fri, 04 May 2018 13:39:33 -0400
Subject: www.autonlab.org fixed!
In-Reply-To:
References: <20180504162620.tSgwtpB2N%predragp@andrew.cmu.edu>
Message-ID: <20180504173933.KHny3Lg3b%predragp@andrew.cmu.edu>

Simon Heath wrote:

> root at lweb # tail /var/log/access.log
> 192.168.10.254 - - [04/May/2018:12:36:21 -0400] "GET / HTTP/1.0" 500 0 "-"
> "Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"
>

Great clue!!! This was broken by the FreeBSD dokuwiki or php56 port maintainer. A simple

pkg install php56-json

fixed it. Apparently somebody removed php56-json as a DokuWiki dependency.

Thanks Simon!

Predrag

> root at lweb # tail -f /var/log/error.log
> 2018/05/04 12:35:57 [error] 56082#100810: *200 FastCGI sent in stderr: "PHP
> message: PHP Fatal error: Call to undefined function hash_equals() in
> /usr/local/www/dokuwiki/inc/PassHash.class.php on line 94" while reading
> response header from upstream, client: 192.168.10.254, server:
> lweb.dmz.autonlab.org, request: "GET / HTTP/1.0", upstream:
> "fastcgi://unix:/var/run/php-fpm.sock:", host: "www.autonlab.org"
>
> Function it's not finding: http://php.net/manual/en/function.hash-equals.php
>
> Minimum version 5.6.0, it should be there but it isn't:
>
> root at lweb:/usr/local/etc/nginx # php
>
> PHP Fatal error: Call to undefined function hash_equals() in - on line 1
> root at lweb:/usr/local/etc/nginx # php --version
> PHP 5.6.35 (cli) (built: Apr 10 2018 01:14:01)
> Copyright (c) 1997-2016 The PHP Group
> Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
>
> Looks like `hash_equals` is part of the Hash package, which technically an
> extension, though it says it's compiled into php by default and (maybe?)
> can't be left out.
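For reference, here is a quick way to double-check that the missing pieces are actually loaded after installing the package. This is a minimal sketch, assuming the stock php56 CLI inside the web jail; the prompt and package names are illustrative:

root at lweb # php -m | grep -i -E 'hash|json'
# the loaded-module list should now include both hash and json
root at lweb # php -r 'var_dump(function_exists("hash_equals"), function_exists("json_encode"));'
# both should print bool(true) once the extensions are present
root at lweb # pkg info -x php56
# lists the installed php56-* packages, including php56-json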
> > root at lweb:/usr/local/etc # php -i | grep -i extension_dir > extension_dir => /usr/local/lib/php/20131226 => /usr/local/lib/php/20131226 > > I don't know how you compiled php for this, since it doesn't seem to be in > /usr/ports. But looking at the Makefile in /usr/ports/lang/php56 in the > host directory, it implies the "hash" extension isn't installed by default > (maybe). Running "php -m" prints out the modules it has but doesn't list > "hash". ("json" apparently is also necessary.) > > This makes no sense, but it appears to be what's going on. Hopefully we > just have to compile php with the right options? > > Simon > > On Fri, May 4, 2018 at 12:31 PM, Simon Heath wrote: > > > Ok, I'll check it out. > > > > On Fri, May 4, 2018 at 12:26 PM, Predrag Punosevac < > > predragp at andrew.cmu.edu> wrote: > > > >> Hi Simon, > >> > >> I tried to reach https://www.autonlab.org. The website is not loading. > >> There is nothing wrong with the proxy server as you can check by loading > >> > >> https://www.autonlab.org/ipe > >> > >> or > >> > >> https://www.autonlab.org/sdss/ > >> > >> I went to the warden.dmz.autonlab.org (jail host which runs > >> lweb.dmz.autonlab.org). Host is OK. Nothing wrong with jails as well > >> > >> root at warden:~ # iocell list > >> JID UUID BOOT STATE TAG > >> TYPE IP4 RELEASE > >> 6 600ec005-17f4-11e8-a693-0cc47aab0128 on up > >> hugo.dmz.autonlab.org basejail 192.168.10.93 11.1-RELEASE > >> 8 b4b45c86-c4ed-11e7-b09c-0cc47a6baaa0 on up > >> lweb.dmz.autonlab.org basejail 192.168.10.91 11.1-RELEASE > >> > >> > >> I logged into the lweb.dmz.autonlab.org > >> > >> iocell console lweb.dmz.autonlab.org > >> > >> root at lweb:~ # pwd > >> /root > >> > >> nginx and php-fpm is running normally. Something is either broken with > >> php interpreter or dokuwiki itself which also appears OK. > >> > >> root at lweb:/usr/local/www/dokuwiki # ls > >> .htaccess.dist bin findbadphp.php > >> COPYING conf inc > >> README conf-original index.php > >> README-Predrag.txt data install.php > >> VERSION data-original lib > >> _media doku.php vendor > >> auton.png feed.php > >> > >> > >> I am running here out of ideas and my brain is clouded by the lack of > >> sleep. Can you have a quick look. This is super urgent. > >> > >> Predrag > >> > > > > > > > > -- > > Simon Heath, Research Programmer and Analyst > > Robotics Institute - Auton Lab > > Carnegie Mellon University > > sheath at andrew.cmu.edu > > > > > > -- > Simon Heath, Research Programmer and Analyst > Robotics Institute - Auton Lab > Carnegie Mellon University > sheath at andrew.cmu.edu From donghanw at cs.cmu.edu Fri May 4 14:00:26 2018 From: donghanw at cs.cmu.edu (Donghan Wang) Date: Fri, 4 May 2018 14:00:26 -0400 Subject: www.autonlab.org fixed! In-Reply-To: <20180504173933.KHny3Lg3b%predragp@andrew.cmu.edu> References: <20180504162620.tSgwtpB2N%predragp@andrew.cmu.edu> <20180504173933.KHny3Lg3b%predragp@andrew.cmu.edu> Message-ID: Thank you, Predrag and Simon, for your prompt reply in resolving the issue! Thanks, Jarod On Fri, May 4, 2018 at 1:39 PM, Predrag Punosevac wrote: > Simon Heath wrote: > > > root at lweb # tail /var/log/access.log > > 192.168.10.254 - - [04/May/2018:12:36:21 -0400] "GET / HTTP/1.0" 500 0 > "-" > > "Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0" > > > > Great clue!!! This was broken by FreeBSD dokuwiki or php56 port > maintainer. The simple > > pkg install php56-json > > fixed. Apparently somebody removed php56-json as DokuWiki dependency. > > Thanks Simon! 
> > Predrag > > > > root at lweb # tail -f /var/log/error.log > > 2018/05/04 12:35:57 [error] 56082#100810: *200 FastCGI sent in stderr: > "PHP > > message: PHP Fatal error: Call to undefined function hash_equals() in > > /usr/local/www/dokuwiki/inc/PassHash.class.php on line 94" while reading > > response header from upstream, client: 192.168.10.254, server: > > lweb.dmz.autonlab.org, request: "GET / HTTP/1.0", upstream: > > "fastcgi://unix:/var/run/php-fpm.sock:", host: "www.autonlab.org" > > > > Function it's not finding: http://php.net/manual/en/ > function.hash-equals.php > > > > Minimum version 5.6.0, it should be there but it isn't: > > > > root at lweb:/usr/local/etc/nginx # php > > > > PHP Fatal error: Call to undefined function hash_equals() in - on line 1 > > root at lweb:/usr/local/etc/nginx # php --version > > PHP 5.6.35 (cli) (built: Apr 10 2018 01:14:01) > > Copyright (c) 1997-2016 The PHP Group > > Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies > > > > Looks like `hash_equals` is part of the Hash package, which technically > an > > extension, though it says it's compiled into php by default and (maybe?) > > can't be left out. > > > > root at lweb:/usr/local/etc # php -i | grep -i extension_dir > > extension_dir => /usr/local/lib/php/20131226 => > /usr/local/lib/php/20131226 > > > > I don't know how you compiled php for this, since it doesn't seem to be > in > > /usr/ports. But looking at the Makefile in /usr/ports/lang/php56 in the > > host directory, it implies the "hash" extension isn't installed by > default > > (maybe). Running "php -m" prints out the modules it has but doesn't list > > "hash". ("json" apparently is also necessary.) > > > > This makes no sense, but it appears to be what's going on. Hopefully we > > just have to compile php with the right options? > > > > Simon > > > > On Fri, May 4, 2018 at 12:31 PM, Simon Heath > wrote: > > > > > Ok, I'll check it out. > > > > > > On Fri, May 4, 2018 at 12:26 PM, Predrag Punosevac < > > > predragp at andrew.cmu.edu> wrote: > > > > > >> Hi Simon, > > >> > > >> I tried to reach https://www.autonlab.org. The website is not > loading. > > >> There is nothing wrong with the proxy server as you can check by > loading > > >> > > >> https://www.autonlab.org/ipe > > >> > > >> or > > >> > > >> https://www.autonlab.org/sdss/ > > >> > > >> I went to the warden.dmz.autonlab.org (jail host which runs > > >> lweb.dmz.autonlab.org). Host is OK. Nothing wrong with jails as well > > >> > > >> root at warden:~ # iocell list > > >> JID UUID BOOT STATE TAG > > >> TYPE IP4 RELEASE > > >> 6 600ec005-17f4-11e8-a693-0cc47aab0128 on up > > >> hugo.dmz.autonlab.org basejail 192.168.10.93 11.1-RELEASE > > >> 8 b4b45c86-c4ed-11e7-b09c-0cc47a6baaa0 on up > > >> lweb.dmz.autonlab.org basejail 192.168.10.91 11.1-RELEASE > > >> > > >> > > >> I logged into the lweb.dmz.autonlab.org > > >> > > >> iocell console lweb.dmz.autonlab.org > > >> > > >> root at lweb:~ # pwd > > >> /root > > >> > > >> nginx and php-fpm is running normally. Something is either broken with > > >> php interpreter or dokuwiki itself which also appears OK. > > >> > > >> root at lweb:/usr/local/www/dokuwiki # ls > > >> .htaccess.dist bin findbadphp.php > > >> COPYING conf inc > > >> README conf-original index.php > > >> README-Predrag.txt data install.php > > >> VERSION data-original lib > > >> _media doku.php vendor > > >> auton.png feed.php > > >> > > >> > > >> I am running here out of ideas and my brain is clouded by the lack of > > >> sleep. 
Can you have a quick look. This is super urgent. > > >> > > >> Predrag > > >> > > > > > > > > > > > > -- > > > Simon Heath, Research Programmer and Analyst > > > Robotics Institute - Auton Lab > > > Carnegie Mellon University > > > sheath at andrew.cmu.edu > > > > > > > > > > > -- > > Simon Heath, Research Programmer and Analyst > > Robotics Institute - Auton Lab > > Carnegie Mellon University > > sheath at andrew.cmu.edu > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yz6 at andrew.cmu.edu Tue May 8 12:44:30 2018 From: yz6 at andrew.cmu.edu (Yang Zhang) Date: Tue, 8 May 2018 12:44:30 -0400 Subject: Git Lfs Message-ID: <9498B9A8-86CC-4EE2-926C-13D0AADB5024@andrew.cmu.edu> Hi everybody, Is git Lfs installed somewhere on autolab? If not, would it be possible to install this? Thanks! Best, Yang From predragp at andrew.cmu.edu Tue May 8 14:59:49 2018 From: predragp at andrew.cmu.edu (Predrag Punosevac) Date: Tue, 08 May 2018 14:59:49 -0400 Subject: Git Lfs In-Reply-To: <9498B9A8-86CC-4EE2-926C-13D0AADB5024@andrew.cmu.edu> References: <9498B9A8-86CC-4EE2-926C-13D0AADB5024@andrew.cmu.edu> Message-ID: <20180508185949.ZjH2Rv_Rm%predragp@andrew.cmu.edu> Yang Zhang wrote: > Hi everybody, > > Is git Lfs installed somewhere on autolab? > If not, would it be possible to install this? > Thanks! This is the first time I have heard of git-lfs. In the Auton Lab we recommend people to use rh-git29 which can be found in /opt/rh/rh-git29/root/bin as oppose to the older default version which is the dependency for R. I don't see RPM which provides rh-git29-lfs http://puias.princeton.edu/data/puias/SCL/7.5/x86_64/ which means that the only way to install is from sources. https://git-lfs.github.com/ Since git lfs command line extension has to be set up per user I don't see any benefit of me compiling this but one of our scientific programmers will hopefully correct me if I am wrong. Cheers, Predrag > > Best, > Yang From donghanw at cs.cmu.edu Tue May 8 15:56:21 2018 From: donghanw at cs.cmu.edu (Donghan Wang) Date: Tue, 8 May 2018 15:56:21 -0400 Subject: Git Lfs In-Reply-To: <20180508185949.ZjH2Rv_Rm%predragp@andrew.cmu.edu> References: <9498B9A8-86CC-4EE2-926C-13D0AADB5024@andrew.cmu.edu> <20180508185949.ZjH2Rv_Rm%predragp@andrew.cmu.edu> Message-ID: Predrag and Yang, I installed git-lfs using yum via PackageCloud. curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash # which creates /etc/yum.repos.d/github_git-lfs.repo yum install git-lfs References: https://packagecloud.io/github/git-lfs/install#bash-rpm Thanks, Jarod On Tue, May 8, 2018 at 2:59 PM, Predrag Punosevac wrote: > Yang Zhang wrote: > > > Hi everybody, > > > > Is git Lfs installed somewhere on autolab? > > If not, would it be possible to install this? > > Thanks! > > This is the first time I have heard of git-lfs. In the Auton Lab we > recommend people to use rh-git29 which can be found in > > /opt/rh/rh-git29/root/bin > > as oppose to the older default version which is the dependency for R. > I don't see RPM which provides rh-git29-lfs > > http://puias.princeton.edu/data/puias/SCL/7.5/x86_64/ > > which means that the only way to install is from sources. > > https://git-lfs.github.com/ > > Since git lfs command line extension has to be set up per user I don't > see any benefit of me compiling this but one of our scientific > programmers will hopefully correct me if I am wrong. 
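Since the extension is set up per user, first use typically looks roughly like the following. This is a minimal sketch only; the tracked pattern, file name, and remote are purely illustrative:

$ git lfs install                    # one-time per-user setup (adds the lfs filters to ~/.gitconfig)
$ git lfs track "*.h5"               # inside a repo: let git-lfs manage large .h5 files
$ git add .gitattributes model.h5    # .gitattributes records the tracking rule
$ git commit -m "add model via git-lfs"
$ git push origin master             # large file contents go to the LFS store rather than regular git objects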
> > Cheers, > Predrag > > > > > Best, > > Yang > -------------- next part -------------- An HTML attachment was scrubbed... URL: From predragp at andrew.cmu.edu Wed May 9 22:37:40 2018 From: predragp at andrew.cmu.edu (Predrag Punosevac) Date: Wed, 09 May 2018 22:37:40 -0400 Subject: Git Lfs In-Reply-To: References: <9498B9A8-86CC-4EE2-926C-13D0AADB5024@andrew.cmu.edu> <20180508185949.ZjH2Rv_Rm%predragp@andrew.cmu.edu> Message-ID: <20180510023740.M3E1ARgo_%predragp@andrew.cmu.edu> Donghan Wang wrote: > Predrag and Yang, > > I installed git-lfs using yum via PackageCloud. > > curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh > | sudo bash > That is so wrong. Randomly adding RPM repositories is the quickest way to break your Red Hat system > # which creates /etc/yum.repos.d/github_git-lfs.repo > > yum install git-lfs A little more digging reveals that git-lfs is the part of the official EPEL repository. I have installed git-lfs on all our computing nodes. It seems to be working well with git 2.9 predrag at lake$ more .kshrc|grep git alias git='/opt/rh/rh-git29/root/bin/git' predrag at lake$ git lfs install Git LFS initialized. Cheers, Predrag > > > References: https://packagecloud.io/github/git-lfs/install#bash-rpm > > Thanks, > Jarod > > On Tue, May 8, 2018 at 2:59 PM, Predrag Punosevac > wrote: > > > Yang Zhang wrote: > > > > > Hi everybody, > > > > > > Is git Lfs installed somewhere on autolab? > > > If not, would it be possible to install this? > > > Thanks! > > > > This is the first time I have heard of git-lfs. In the Auton Lab we > > recommend people to use rh-git29 which can be found in > > > > /opt/rh/rh-git29/root/bin > > > > as oppose to the older default version which is the dependency for R. > > I don't see RPM which provides rh-git29-lfs > > > > http://puias.princeton.edu/data/puias/SCL/7.5/x86_64/ > > > > which means that the only way to install is from sources. > > > > https://git-lfs.github.com/ > > > > Since git lfs command line extension has to be set up per user I don't > > see any benefit of me compiling this but one of our scientific > > programmers will hopefully correct me if I am wrong. > > > > Cheers, > > Predrag > > > > > > > > Best, > > > Yang > > From awd at cs.cmu.edu Fri May 11 14:55:34 2018 From: awd at cs.cmu.edu (Artur Dubrawski) Date: Fri, 11 May 2018 14:55:34 -0400 Subject: Fwd: Thesis Defense - 5/25/18 - Junier Oliva - Distribution and Histogram (DisH) Learning In-Reply-To: <5a4bdba4-227c-2d1c-5c55-b9a038cbc288@cs.cmu.edu> References: <5a4bdba4-227c-2d1c-5c55-b9a038cbc288@cs.cmu.edu> Message-ID: This may be the last chance to see Junier as a student! Artur ---------- Forwarded message ---------- From: Diane Stidle Date: Fri, May 11, 2018 at 2:29 PM Subject: Thesis Defense - 5/25/18 - Junier Oliva - Distribution and Histogram (DisH) Learning To: "ml-seminar at cs.cmu.edu" , Le Song < lsong at cc.gatech.edu> Thesis Defense Date: May 25, 2018 Time: 10:00am Place: 8102 GHC PhD Candidate: Junier Oliva Title: Distribution and Histogram (DisH) Learning Abstract: Machine learning has made incredible advances in the last couple of decades. Notwithstanding, a lot of this progress has been limited to basic point-estimation tasks. That is, a large bulk of attention has been geared at solving problems that take in a static finite vector and map it to another static finite vector. However, we do not navigate through life in a series of point-estimation problems, mapping x to y. 
Instead, we find broad patterns and gather a far-sighted understanding of data by considering collections of points like sets, sequences, and distributions. Thus, contrary to what various billionaires, celebrity theoretical physicists, and sci-fi classics would lead you to believe, true machine intelligence is fairly out of reach currently. In order to bridge this gap, we have developed algorithms that understand data at an aggregate, holistic level. This thesis pushes machine learning past the realm of operating over static finite vectors, to start reasoning ubiquitously with complex, dynamic collections like sets and sequences. We develop algorithms that consider distributions as functional covariates/responses, and methods that use distributions as internal representations. We consider distributions since they are a straightforward characterization of many natural phenomena and provide a richer description than simple point data by detailing information at an aggregate level. Our approach may be seen as addressing two sides of the same coin: on one side, we use traditional machine learning algorithms adjusted to directly operate on inputs and outputs that are probability functions (and sample sets); on the other side, we develop better estimators for traditional tasks by making use of and adjusting internal distributions. We begin by developing algorithms for traditional machine learning tasks for the cases when one?s input (and/or possibly output) is not a finite point, but is instead a distribution, or sample set drawn from a distribution. We develop a scalable nonparametric estimator for regressing a real valued response given an input that is a distribution, a case which we coin distribution to real regression (DRR). Furthermore, we extend this work to the case when both the output response and the input covariate are distributions; a task we call distribution to distribution regression (DDR). After, we look to expand the versatility and efficacy of traditional machine learning tasks through novel methods that operate with distributions of features. For example, we show that one may improve the performance of kernel learning tasks by learning a kernel?s spectral distribution in a data-driven fashion using Bayesian nonparametric techniques. Moreover, we study how to perform sequential modeling by looking at summary statistics from past points. Lastly, we also develop methods for high-dimensional density estimation that make use of flexible transformations of variables and autoregressive conditionals. Thesis Committee: Barnabas Poczos (Co-Chair) Jeff Schneider (Co-Chair) Ruslan Salakhutdinov Le Song (Georgia Institute of Technology, lsong at cc.gatech.edu) Link to draft document: https://www.dropbox.com/s/z93s3qanl02fs8l/draft.pdf?dl=0 -- Diane Stidle Graduate Programs Manager Machine Learning Department Carnegie Mellon Universitydiane at cs.cmu.edu 412-268-1299 -------------- next part -------------- An HTML attachment was scrubbed... URL: From predragp at andrew.cmu.edu Sun May 13 22:31:04 2018 From: predragp at andrew.cmu.edu (Predrag Punosevac) Date: Sun, 13 May 2018 22:31:04 -0400 Subject: Main file server Message-ID: <20180514023104.cB0b6y7N_%predragp@andrew.cmu.edu> Dear Autonians, Gaia is no more our main file server. I could not keep the main file server any longer due to the lack of storage capacity. I know some of you have May 27 deadline but I run out of options. 
Currently the shell gateways bash.autonlab.org and lop1.autonlab.org have been switched to the backup file server, as have computing nodes compute-0-0 and compute-0-1, which were used over the last couple of days as a test bed. I am now going manually from computing node to computing node and switching things over to the backup file server. Reboots are likely in order to clear stale file handles. If you are writing output into your home directory right now, you might lose the work between now and the new mount (a few hours).

Predrag

P.S. Auton Lab supported desktops will be switched after the computing nodes. You will be able to mount the data and project datasets back, as I have opened the ports I needed.

From predragp at andrew.cmu.edu Sun May 13 22:52:22 2018
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Sun, 13 May 2018 22:52:22 -0400
Subject: Main file server
In-Reply-To: <20180514023104.cB0b6y7N_%predragp@andrew.cmu.edu>
References: <20180514023104.cB0b6y7N_%predragp@andrew.cmu.edu>
Message-ID: <20180514025222.7CiuunAat%predragp@andrew.cmu.edu>

Predrag Punosevac wrote:

> Dear Autonians,
>

lov1, lov2, lov3, gpu1, and gpu2 have already been switched to the backup file server. The process did involve a reboot. The upshot is that, as I go through the machines, they are also being upgraded.

gpu7 was switched to the backup file server without a reboot due to its special designation.

Predrag

> Gaia is no more our main file server. I could not keep the main file
> server any longer due to the lack of storage capacity. I know some of
> you have May 27 deadline but I run out of options. Currently shell
> gateways bash.autonlab.org and lop1.autonlab.org are switched to backup
> file server as well as computing nodes compute-0-0 and compute-0-1 which
> were used last couple of days as a test bed. I am going manually right
> now from computing node to computing node and switching things to
> backup file server. Reboots are highly possible to clear stale file
> handless. If you are writhing right now output into your home directory
> you might lose the work between now and new mount (few hours).
>
>
> Predrag
>
> P.S. Auton Lab supported desktops will be switched after computing
> nodes. You will mount data and project datasets back as I have open
> ports I needed.

From predragp at andrew.cmu.edu Sun May 13 23:28:34 2018
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Sun, 13 May 2018 23:28:34 -0400
Subject: Main file server
In-Reply-To: <20180514025222.7CiuunAat%predragp@andrew.cmu.edu>
References: <20180514023104.cB0b6y7N_%predragp@andrew.cmu.edu> <20180514025222.7CiuunAat%predragp@andrew.cmu.edu>
Message-ID: <20180514032834.qRwtranj0%predragp@andrew.cmu.edu>

Predrag Punosevac wrote:

> Predrag Punosevac wrote:
>
> > Dear Autonians,
> >
>
> lov1, lov2, lov3, gpu1, gpu2 are already switched to backup file server.
> The process did involve reboot. The upshot is that as I am going through
> machines they are being upgraded.
>
> gpu7 was switched to backup file server without reboot due to its
> special designation.

The saga is almost over! All servers have now been switched to the backup file server. Lou1, which had been blinking like a Christmas light since last year, didn't survive the reboot. I will inspect the machine as soon as I have some time (the old file server needs to be rebuilt and we have a bunch of interns coming to the Lab). It is a 10-year-old machine and is likely dead and not worth fixing.

Predrag

P.S. I am fixing the desktops right now. I will not bother you with another e-mail tonight.

>
>
> Predrag
>
> > Gaia is no more our main file server. I could not keep the main file
> > server any longer due to the lack of storage capacity. I know some of
> > you have May 27 deadline but I run out of options. Currently shell
> > gateways bash.autonlab.org and lop1.autonlab.org are switched to backup
> > file server as well as computing nodes compute-0-0 and compute-0-1 which
> > were used last couple of days as a test bed. I am going manually right
> > now from computing node to computing node and switching things to
> > backup file server. Reboots are highly possible to clear stale file
> > handless. If you are writhing right now output into your home directory
> > you might lose the work between now and new mount (few hours).
> >
> >
> > Predrag
> >
> > P.S. Auton Lab supported desktops will be switched after computing
> > nodes. You will mount data and project datasets back as I have open
> > ports I needed.

From predragp at andrew.cmu.edu Mon May 14 16:07:51 2018
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Mon, 14 May 2018 16:07:51 -0400
Subject: SSH Login Policies
Message-ID: <20180514200751.g9E54XOao%predragp@andrew.cmu.edu>

Dear Autonians,

Effective immediately, Auton Lab members who have Auton Lab managed desktop machines will no longer be able to use the ssh gateways bash.autonlab.org and lop1.autonlab.org to log into the Lab infrastructure. They will have to use their own desktops as shell gateways. This change of policy currently affects 13 lab members (myself included). The only exception to this rule is if you have a Windows desktop machine which is not part of our extended VPN network.

On a related note, the only users who can log into the 13 private desktop machines are the machine owners and the admin. This policy has not changed since I implemented it 5 years ago upon my arrival to the Auton Lab.

Best,
Predrag

From yhechtli at andrew.cmu.edu Mon May 14 17:56:15 2018
From: yhechtli at andrew.cmu.edu (Yotam Hechtlinger)
Date: Mon, 14 May 2018 17:56:15 -0400
Subject: GPU shared usage
Message-ID:

Hello Everyone,

The GPUs are super busy right now, with all of them taken. Some of the GPUs seem to be locked but not actually running anything, so if there is a process you can release, that would be great.

Also, TensorFlow by default locks all available GPUs on the server whenever you call it. If you don't actually intend to run on several GPUs, this will restrict your code to a single one:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # expose only GPU 2 to this process

If you don't need all the memory on the GPU and don't mind other people sharing the card with you, this will dynamically allocate only the amount of memory needed:

import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
sess = tf.Session(config=config)

We're all pretty busy with deadlines. Please be considerate, and try to avoid grabbing several cards if you don't really need them.

Thanks a lot,
Yotam.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From boecking at andrew.cmu.edu Mon May 21 08:35:27 2018
From: boecking at andrew.cmu.edu (Benedikt Boecking)
Date: Mon, 21 May 2018 08:35:27 -0400
Subject: Decentralized Machine Learning at the Edge (DMLE'18)
Message-ID:

All,

One of our former Auton Lab members, Yamuna Krishnamurthy, is co-chairing a workshop on Decentralized Machine Learning at ECML 2018.
In case you are interested, here is the call for papers:

Decentralized Machine Learning at the Edge (DMLE'18)
_____________________________________________

Call for Papers

Website: dmle.iais.fraunhofer.de
Workshop in conjunction with ECMLPKDD 2018
Sep 14, 2018, Dublin, Ireland

This workshop aims to foster discussion, discovery, and dissemination of novel ideas and approaches for decentralized machine learning. In order to scale parallel machine learning to very large volumes of data, decentralized machine learning pushes computation towards the edge, that is, towards the data generating devices. By learning models directly on the data sources, network communication can be reduced by orders of magnitude. Moreover, it enables training a central model without centralizing privacy-sensitive data.

Submission deadline is July 2nd 2018.

We invite submissions of full length (16 pages) and short (8 pages) papers. In addition, we will present a best paper award which includes a certificate and prize. Topics of interest include:

- Parallel machine learning
- Edge computing for machine learning
- Decentralized deep learning
- Federated learning
- In-situ methods
- Communication-efficient learning
- Privacy aspects of distributed learning
- Black-box machine learning
- Distributed optimization
- Theoretical investigations on parallelization
- Large-scale machine learning, massive data sets
- Distributed data mining

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From awd at cs.cmu.edu Fri May 25 08:57:21 2018
From: awd at cs.cmu.edu (Artur Dubrawski)
Date: Fri, 25 May 2018 08:57:21 -0400
Subject: Fwd: Reminder - Thesis Defense - 5/25/18 - Junier Oliva - Distribution and Histogram (DisH) Learning
In-Reply-To: <0ab2ca13-2f68-2577-80d3-8dcf21ddb642@cs.cmu.edu>
References: <0ab2ca13-2f68-2577-80d3-8dcf21ddb642@cs.cmu.edu>
Message-ID:

Team,

If you can, come see Junier becoming a doctor. It is today at 10 in Gates 8102.

Cheers
Artur

---------- Forwarded message ----------
From: Diane Stidle
Date: Thu, May 24, 2018 at 3:29 PM
Subject: Reminder - Thesis Defense - 5/25/18 - Junier Oliva - Distribution and Histogram (DisH) Learning
To: "ml-seminar at cs.cmu.edu" , Le Song < lsong at cc.gatech.edu>

Thesis Defense

Date: May 25, 2018
Time: 10:00am
Place: 8102 GHC
PhD Candidate: Junier Oliva

Title: Distribution and Histogram (DisH) Learning

Abstract:
Machine learning has made incredible advances in the last couple of decades. Notwithstanding, a lot of this progress has been limited to basic point-estimation tasks. That is, a large bulk of attention has been geared at solving problems that take in a static finite vector and map it to another static finite vector. However, we do not navigate through life in a series of point-estimation problems, mapping x to y. Instead, we find broad patterns and gather a far-sighted understanding of data by considering collections of points like sets, sequences, and distributions. Thus, contrary to what various billionaires, celebrity theoretical physicists, and sci-fi classics would lead you to believe, true machine intelligence is fairly out of reach currently. In order to bridge this gap, we have developed algorithms that understand data at an aggregate, holistic level. This thesis pushes machine learning past the realm of operating over static finite vectors, to start reasoning ubiquitously with complex, dynamic collections like sets and sequences.
We develop algorithms that consider distributions as functional covariates/responses, and methods that use distributions as internal representations. We consider distributions since they are a straightforward characterization of many natural phenomena and provide a richer description than simple point data by detailing information at an aggregate level. Our approach may be seen as addressing two sides of the same coin: on one side, we use traditional machine learning algorithms adjusted to directly operate on inputs and outputs that are probability functions (and sample sets); on the other side, we develop better estimators for traditional tasks by making use of and adjusting internal distributions. We begin by developing algorithms for traditional machine learning tasks for the cases when one?s input (and/or possibly output) is not a finite point, but is instead a distribution, or sample set drawn from a distribution. We develop a scalable nonparametric estimator for regressing a real valued response given an input that is a distribution, a case which we coin distribution to real regression (DRR). Furthermore, we extend this work to the case when both the output response and the input covariate are distributions; a task we call distribution to distribution regression (DDR). After, we look to expand the versatility and efficacy of traditional machine learning tasks through novel methods that operate with distributions of features. For example, we show that one may improve the performance of kernel learning tasks by learning a kernel?s spectral distribution in a data-driven fashion using Bayesian nonparametric techniques. Moreover, we study how to perform sequential modeling by looking at summary statistics from past points. Lastly, we also develop methods for high-dimensional density estimation that make use of flexible transformations of variables and autoregressive conditionals. Thesis Committee: Barnabas Poczos (Co-Chair) Jeff Schneider (Co-Chair) Ruslan Salakhutdinov Le Song (Georgia Institute of Technology, lsong at cc.gatech.edu) Link to draft document: https://www.dropbox.com/s/z93s3qanl02fs8l/draft.pdf?dl=0 -- Diane Stidle Graduate Programs Manager Machine Learning Department Carnegie Mellon Universitydiane at cs.cmu.edu 412-268-1299 -------------- next part -------------- An HTML attachment was scrubbed... URL: