From ben.boecking at googlemail.com  Tue Sep  1 16:29:06 2020
From: ben.boecking at googlemail.com (Ben Boecking)
Date: Tue, 1 Sep 2020 15:29:06 -0500
Subject: Auton Lab scratch directories
Message-ID: <80E38575-E331-48CC-B86B-C0D8B488D1DB@googlemail.com>

Hello everyone,

Following up on Chirag's messages last week, the scratch directories are
still 100% full. In particular, lov9 has absolutely no space left, making
it unusable for certain important tasks.

Please, if you have files stored in scratch directories that you no longer
need, remove them to free up space for your colleagues. If the scratch
directories on lov9 and lov8 are still full in 24 hours, we will have to
remove everything in them indiscriminately.

Best,
Ben

From predragp at andrew.cmu.edu  Tue Sep  1 17:40:57 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Tue, 1 Sep 2020 17:40:57 -0400
Subject: Computational Resources Abuse
Message-ID:

Dear Autonians,

It has been brought to my attention by the Auton Lab director Dr.
Dubrawski that many senior lab members are taken aback and frustrated by
the abuse of computational resources and the lack of netiquette by some
junior members of our lab. This abuse has resulted in significant
productivity loss and will no longer be tolerated.

Access to the Auton Lab computing resources is a privilege which comes
with responsibilities to the other 131 members of our scientific group.
The Lab currently has 132 active accounts, which is more than the NREC.
At the very least we should be mindful that the other lab members, many of
whom you have never met in person, are working as hard as we are on moving
the frontier of human knowledge forward. The ladies'/gentlemen's agreement
we have had in place for the past 27 years

https://www.autonlab.org/autonlab_wiki/aetiquette.html
username: auton
password: Dr.Who

has enabled us to maintain high productivity and low user frustration in
spite of having one of the most liberal usage policies on the CMU campus.

We would like to keep it that way and avoid technical solutions like the
SLURM workload manager, which would significantly raise the entry bar for
incoming students. We would also like to avoid heavy-handed policing of
lab resources. Both Dr. Dubrawski and I grew up in countries ruled by
totalitarian governments. We have personal experience with police states
which goes beyond reading 1984, Animal Farm, or The Gulag Archipelago, and
we do not want the Auton Lab experience to bear any resemblance to that.

As of this very moment, we expect everyone to be thoroughly familiar with
the above document, in particular the Auton Lab etiquette sections

https://www.autonlab.org/autonlab_wiki/aetiquette.html
username: auton
password: Dr.Who

Users who are found in violation of the don'ts will lose access to the
Auton Lab resources until the case is reviewed by the Lab's director and
co-director. The account suspension could be made permanent depending on
the severity of the violation.

On a personal note, anybody who is caught holding GPU resources by running
fake jobs should start looking for another cluster for their work. I also
expect people to clear the scratch directories at once.
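If you are not sure what you still have lying around, a minimal sketch
along these lines shows your scratch usage and lets you remove what you no
longer need (the path and the directory name below are examples; adjust
them to the server you actually use):

$ # list your scratch directories, largest last (example path)
$ du -sh /home/scratch/$USER/* 2>/dev/null | sort -h
$ # then delete anything you no longer need (hypothetical directory name)
$ rm -r /home/scratch/$USER/old_experiment_outputs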
Most Kind Regards,
Predrag Punosevac
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From awd at cs.cmu.edu  Tue Sep  1 18:21:56 2020
From: awd at cs.cmu.edu (Artur Dubrawski)
Date: Tue, 1 Sep 2020 18:21:56 -0400
Subject: Computational Resources Abuse
In-Reply-To:
References:
Message-ID:

Thank you Predrag.

Team: I have authorized Predrag to take the active measures needed to
restore civilized manners in the use of our computing resources. These
measures may include termination of processes, removal of data, and may
escalate to termination of the accounts of the worst offenders.

I am quite sad we have to go this way, given that over the past two
decades, perhaps even longer, we did not even have to discuss these kinds
of issues at all. We have been enjoying a friendly, respectful, civilized
culture of using the always scarce resources. But we cannot afford to let
a few rogue processes or users handcuff the rest of the lab anymore.

Please let me know if you have any questions or concerns, and whenever in
doubt - do not hesitate to contact me, Predrag, or Jeff.

Thanks,
Artur

On Tue, Sep 1, 2020 at 5:41 PM Predrag Punosevac wrote:

> Dear Autonians,
>
> It has been brought to my attention by the Auton Lab director Dr.
> Dubrawski that many senior lab members are taken aback and frustrated by
> the abuse of computational resources and the lack of netiquette by some
> junior members of our lab. This abuse has resulted in significant
> productivity loss and will no longer be tolerated.
>
> Access to the Auton Lab computing resources is a privilege which comes
> with responsibilities to the other 131 members of our scientific group.
> The Lab currently has 132 active accounts, which is more than the NREC.
> At the very least we should be mindful that the other lab members, many
> of whom you have never met in person, are working as hard as we are on
> moving the frontier of human knowledge forward. The
> ladies'/gentlemen's agreement we have had in place for the past 27 years
>
> https://www.autonlab.org/autonlab_wiki/aetiquette.html
> username: auton
> password: Dr.Who
>
> has enabled us to maintain high productivity and low user frustration in
> spite of having one of the most liberal usage policies on the CMU
> campus.
>
> We would like to keep it that way and avoid technical solutions like the
> SLURM workload manager, which would significantly raise the entry bar
> for incoming students. We would also like to avoid heavy-handed policing
> of lab resources. Both Dr. Dubrawski and I grew up in countries ruled by
> totalitarian governments. We have personal experience with police states
> which goes beyond reading 1984, Animal Farm, or The Gulag Archipelago,
> and we do not want the Auton Lab experience to bear any resemblance to
> that.
>
> As of this very moment, we expect everyone to be thoroughly familiar
> with the above document, in particular the Auton Lab etiquette sections
>
> https://www.autonlab.org/autonlab_wiki/aetiquette.html
> username: auton
> password: Dr.Who
>
> Users who are found in violation of the don'ts will lose access to the
> Auton Lab resources until the case is reviewed by the Lab's director and
> co-director. The account suspension could be made permanent depending on
> the severity of the violation.
>
> On a personal note, anybody who is caught holding GPU resources by
> running fake jobs should start looking for another cluster for their
> work. I also expect people to clear the scratch directories at once.
>
> Most Kind Regards,
> Predrag Punosevac

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From predragp at andrew.cmu.edu  Tue Sep  1 18:35:42 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Tue, 1 Sep 2020 18:35:42 -0400
Subject: gpu16 and gpu21 on their knees
Message-ID:

GPU16 and GPU21 are on their knees. Their resources are oversubscribed to
the point that there is not even 128 MB of memory left for me to ssh in as
root. I could hard-reboot GPU16 using IPMI from my home. GPU21 is not
connected to the IPMI server due to its current temporary location, so
that is not an option. The machine room is only crewed 8 hours a day
during normal business hours, so my hands are tied with respect to gpu21.
Yesterday I rebooted gpu20, which had been unresponsive since Friday night
with the very same symptoms.

This is not the way to use $40K servers. This is a serious drain on our
productivity. Please be reasonable in your expectations of what you can
get out of these machines.
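A quick sanity check before launching anything big costs nothing. For
example (standard tools, nothing lab-specific):

$ free -h       # how much memory is actually left
$ uptime        # load average versus the number of cores
$ nvidia-smi    # per-card GPU memory and utilization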
Predrag
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From predragp at andrew.cmu.edu  Tue Sep  1 19:12:53 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Tue, 1 Sep 2020 19:12:53 -0400
Subject: Upgrade planning
Message-ID:

Dear Autonians,

It has been brought to my attention that the CUDA upgrade from 10.3 to
11.0 did not work as expected for some people. That is very unfortunate in
light of the upcoming deadlines.

The servers are like living beings. They are constantly metamorphosing
even if I don't do anything. Sitting on existing software versions for too
long has the advantage that the system works fully, or mostly so, for 5-6
months, but it has the unintended consequence of forcing us to do updates
at the least desirable times, right before conferences. I was very
reluctant to upgrade from 10.3 to 11.0, but I had to after a multitude of
reports about broken TensorFlow.

Because my training is in pure mathematics, I lack basic awareness of the
major conferences and deadlines in the field of Machine Learning.
Typically this is not a problem during normal times, as people just walk
into my office and tell me to keep my hands off the equipment for the next
two weeks. These are not normal times, so I will have to do a somewhat
better job. I am trying to create a calendar of blocked-out weeks during
which changes to our computing infrastructure will be kept to a minimum.
I am thinking of two solid weeks before any major conference or deadline.
Could you please kindly help Ben and me do this by filling out the poll he
created

https://docs.google.com/document/d/13GeOx5IM7zxS2IJjPoJweHFKKw8-PYLjL8Z9vG0G8i0/edit?usp=sharing

Best,
Predrag
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From awd at cs.cmu.edu  Wed Sep  2 08:49:31 2020
From: awd at cs.cmu.edu (Artur Dubrawski)
Date: Wed, 2 Sep 2020 08:49:31 -0400
Subject: Andrew Moore pushing practical utility of AI
Message-ID:

Team,

See this release for some inspiring news:
https://siliconangle.com/2020/09/01/google-intros-new-ai-features-tools-advance-mlops/

Jeff: Shouldn't Schenley Park Research LLC be receiving royalties for
Vizier? :)

Cheers
Artur
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ngisolfi at cs.cmu.edu  Thu Sep  3 10:33:36 2020
From: ngisolfi at cs.cmu.edu (Nick Gisolfi)
Date: Thu, 3 Sep 2020 10:33:36 -0400
Subject: [Lunch] Today @noon over Zoom
Message-ID:

https://cmu.zoom.us/j/492870487

We hope to see you there!

- Nick
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From predragp at andrew.cmu.edu  Thu Sep  3 15:47:44 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Thu, 03 Sep 2020 15:47:44 -0400
Subject: Unable to access GPU 20 & 21
In-Reply-To:
References:
Message-ID: <20200903194744.uDvzf%predragp@andrew.cmu.edu>

Sarveshwaran Jayaraman wrote:
> Hi Predrag,
>
> I am unable to ssh into gpu20 & gpu21. Can you please look into it?
> Thanks!

Fixed! The sssd daemon died. Welcome to the wonderful world of systemd :-)
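For the record, the usual first aid on a systemd machine looks something
like this:

$ systemctl status sssd               # is the daemon alive?
$ sudo systemctl restart sssd         # bring it back
$ journalctl -u sssd --since today    # recent logs if it keeps dying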
Predrag

> Sarvesh Jayaraman
> Sr. Research Analyst, Auton Lab
> Carnegie Mellon University
> Mob: +1-240-893-4287

From predragp at andrew.cmu.edu  Fri Sep  4 16:53:53 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Fri, 04 Sep 2020 16:53:53 -0400
Subject: gpu22 and gpu23 added to the cluster
Message-ID: <20200904205353.5jg2l%predragp@andrew.cmu.edu>

Dear Autonians,

I just completed provisioning of the two newest additions to our cluster,
gpu22 and gpu23, recently purchased with Dr. Dubrawski's funding. These
are monster machines with specifications identical to gpu20 and gpu21.

        CPU cores   RAM      GPU cards
gpu22   40          192 GB   4x Tesla V100 SXM2
gpu23   40          192 GB   4x Tesla V100 SXM2

CUDA is 11.0, and I added the cuDNN library. Python lives in
/opt/miniconda-py38.

Have fun!!!

Predrag

From predragp at andrew.cmu.edu  Tue Sep  8 17:23:13 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Tue, 08 Sep 2020 17:23:13 -0400
Subject: GPU20 and GPU21 scheduled downtime
Message-ID: <20200908212313.RxLaa%predragp@andrew.cmu.edu>

Dear Autonians,

If you have important upcoming deadlines and cannot afford 1 hour of
downtime on GPU20 and GPU21 this Friday, please speak up now. Namely,
these two servers, purchased in May of this year and provisioned the first
week of June, are currently located in somebody else's server rack, as we
had no electricity for them at the time of deployment. As you know, we now
have the new rack and the power, and these two servers have to move to the
same location as GPU22 and GPU23. If everything goes according to my plan,
I will power off GPU20 and GPU21 for no more than an hour this coming
Friday around noon, which is the time I need to physically relocate them.

Best,
Predrag

From predragp at andrew.cmu.edu  Tue Sep  8 18:34:40 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Tue, 8 Sep 2020 18:34:40 -0400
Subject: gpu20 and gpu21 run to the ground
Message-ID:

Dear Autonians,

gpu20 and gpu21 just crashed. They are not even connected to the IPMI
server, so they cannot be restarted remotely - that is exactly what I was
planning to fix on Friday. I am trying to reach people currently in the
machine room who can restart them for us.

We have to stop this cycle of overloading machines to the point where they
die and become useless to everyone. I hope the people who do this realize
that they are just wasting their own time, as the machines are stateless
and work is not recoverable. Benedikt Boecking was kind enough to spend
some time creating this little write-up, which will be included in our
wiki. In the coming weeks I will look into technical solutions which will
hopefully prevent this from happening again.

Automatic multiprocessing issue: some libraries automatically use all
cores for some of their underlying routines. In particular, this happens
with Python (numpy, scipy, ...). Automatic multiprocessing on the servers
can slow your code down and also impact your colleagues.
a. Spawning as many threads as the server has cores can be more expensive
than just executing the routine on one thread.
b. Running your process on all cores will impact the ability of your
colleagues to use the server.
c. If you already do your own multiprocessing, these effects can multiply
and you might unwittingly try to parallelize across thousands of threads.

Please monitor your resource usage via top/htop and make sure you are not
flooding the server with too many parallel jobs. To fix the issue you can:

1. Set environment variables

$ export MKL_NUM_THREADS=1
$ export NUMEXPR_NUM_THREADS=1
$ export OMP_NUM_THREADS=1
$ export OPENBLAS_NUM_THREADS=1
$ export VECLIB_MAXIMUM_THREADS=1

2. Set these variables in Python before importing any other libraries

import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["VECLIB_MAXIMUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"
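3. If the interpreter is already running and you cannot set the
environment first, the third-party threadpoolctl package can cap
BLAS/OpenMP threads at runtime. Treat this as a sketch: threadpoolctl is a
separate install (pip install threadpoolctl) and may not be present on the
servers.

import numpy as np
from threadpoolctl import threadpool_limits

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
# BLAS calls inside this block are capped at a single thread
with threadpool_limits(limits=1):
    c = a @ b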
Best,
Predrag
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From boecking at andrew.cmu.edu  Tue Sep  8 18:50:51 2020
From: boecking at andrew.cmu.edu (Benedikt Boecking)
Date: Tue, 8 Sep 2020 17:50:51 -0500
Subject: gpu20 and gpu21 run to the ground
In-Reply-To:
References:
Message-ID:

In addition to always using htop or top to monitor whether you are running
as many parallel jobs as intended, please please please also keep an eye
on your memory usage. Check right after you start your job and a few times
in between to make sure everything is as you intended.

Just FYI: in htop, you can press u and search for your username. If you
press t you get the tree structure of your processes, which can be
helpful.
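If you prefer a one-off snapshot over an interactive view, plain ps can
show your processes with their thread counts (the nlwp column), sorted by
CPU usage:

$ ps -u $USER -o pid,nlwp,pcpu,pmem,args --sort=-pcpu | head -n 10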
> On Sep 8, 2020, at 5:34 PM, Predrag Punosevac wrote:
>
> Dear Autonians,
>
> gpu20 and gpu21 just crashed. They are not even connected to the IPMI
> server, so they cannot be restarted remotely - that is exactly what I
> was planning to fix on Friday. I am trying to reach people currently in
> the machine room who can restart them for us.
>
> We have to stop this cycle of overloading machines to the point where
> they die and become useless to everyone. I hope the people who do this
> realize that they are just wasting their own time, as the machines are
> stateless and work is not recoverable. Benedikt Boecking was kind enough
> to spend some time creating this little write-up, which will be included
> in our wiki. In the coming weeks I will look into technical solutions
> which will hopefully prevent this from happening again.
>
> Automatic multiprocessing issue: some libraries automatically use all
> cores for some of their underlying routines. In particular, this happens
> with Python (numpy, scipy, ...). Automatic multiprocessing on the
> servers can slow your code down and also impact your colleagues.
> a. Spawning as many threads as the server has cores can be more
> expensive than just executing the routine on one thread.
> b. Running your process on all cores will impact the ability of your
> colleagues to use the server.
> c. If you already do your own multiprocessing, these effects can
> multiply and you might unwittingly try to parallelize across thousands
> of threads.
>
> Please monitor your resource usage via top/htop and make sure you are
> not flooding the server with too many parallel jobs. To fix the issue
> you can:
> 1. Set environment variables
> $ export MKL_NUM_THREADS=1
> $ export NUMEXPR_NUM_THREADS=1
> $ export OMP_NUM_THREADS=1
> $ export OPENBLAS_NUM_THREADS=1
> $ export VECLIB_MAXIMUM_THREADS=1
>
> 2. Set these variables in Python before importing any other libraries
> import os
> os.environ["OMP_NUM_THREADS"] = "1"
> os.environ["OPENBLAS_NUM_THREADS"] = "1"
> os.environ["MKL_NUM_THREADS"] = "1"
> os.environ["VECLIB_MAXIMUM_THREADS"] = "1"
> os.environ["NUMEXPR_NUM_THREADS"] = "1"
>
> Best,
> Predrag

From predragp at andrew.cmu.edu  Tue Sep  8 19:47:17 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Tue, 8 Sep 2020 19:47:17 -0400
Subject: gpu20 and gpu21 run to the ground
In-Reply-To:
References:
Message-ID:

I owe all of you an apology. Lin from operations went to our servers and
found that they had been unplugged from power. That has never happened to
me in 7 years at the Auton Lab. There is no point powering them up
tonight. I will drive to CMU tomorrow and move the servers to our rack.

Predrag

On Tue, Sep 8, 2020, 6:34 PM Predrag Punosevac wrote:

> Dear Autonians,
>
> gpu20 and gpu21 just crashed. They are not even connected to the IPMI
> server, so they cannot be restarted remotely - that is exactly what I
> was planning to fix on Friday. I am trying to reach people currently in
> the machine room who can restart them for us.
>
> We have to stop this cycle of overloading machines to the point where
> they die and become useless to everyone. I hope the people who do this
> realize that they are just wasting their own time, as the machines are
> stateless and work is not recoverable. Benedikt Boecking was kind enough
> to spend some time creating this little write-up, which will be included
> in our wiki. In the coming weeks I will look into technical solutions
> which will hopefully prevent this from happening again.
>
> Automatic multiprocessing issue: some libraries automatically use all
> cores for some of their underlying routines. In particular, this happens
> with Python (numpy, scipy, ...). Automatic multiprocessing on the
> servers can slow your code down and also impact your colleagues.
> a. Spawning as many threads as the server has cores can be more
> expensive than just executing the routine on one thread.
> b. Running your process on all cores will impact the ability of your
> colleagues to use the server.
> c. If you already do your own multiprocessing, these effects can
> multiply and you might unwittingly try to parallelize across thousands
> of threads.
>
> Please monitor your resource usage via top/htop and make sure you are
> not flooding the server with too many parallel jobs. To fix the issue
> you can:
> 1. Set environment variables
> $ export MKL_NUM_THREADS=1
> $ export NUMEXPR_NUM_THREADS=1
> $ export OMP_NUM_THREADS=1
> $ export OPENBLAS_NUM_THREADS=1
> $ export VECLIB_MAXIMUM_THREADS=1
>
> 2. Set these variables in Python before importing any other libraries
> import os
> os.environ["OMP_NUM_THREADS"] = "1"
> os.environ["OPENBLAS_NUM_THREADS"] = "1"
> os.environ["MKL_NUM_THREADS"] = "1"
> os.environ["VECLIB_MAXIMUM_THREADS"] = "1"
> os.environ["NUMEXPR_NUM_THREADS"] = "1"
>
> Best,
> Predrag

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From predragp at andrew.cmu.edu  Wed Sep  9 15:44:38 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Wed, 09 Sep 2020 15:44:38 -0400
Subject: GPU20 & GPU21 migration completed
Message-ID: <20200909194438.6MMwS%predragp@andrew.cmu.edu>

Dear Autonians,

The servers are now in their permanent place. I am working on connecting
them to our IPMI server. Going forward we should not see surprises like
yesterday's.
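Once they are on IPMI, a remote power check or power cycle should be a
one-liner, roughly like this (the BMC hostname and credentials here are
placeholders):

$ ipmitool -I lanplus -H gpu20-ipmi.example -U ADMIN -P '<password>' chassis power status
$ ipmitool -I lanplus -H gpu20-ipmi.example -U ADMIN -P '<password>' chassis power cycle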
Best,
Predrag

From yeehos at andrew.cmu.edu  Wed Sep 16 12:54:24 2020
From: yeehos at andrew.cmu.edu (Yeeho Song)
Date: Wed, 16 Sep 2020 12:54:24 -0400
Subject: LOV6 Full
Message-ID:

This is a gentle reminder that the LOV6 scratch space is almost full.
Please check and delete / move your files from the scratch directories if
possible. Thank you!

yeehos at lov6$ df -h /home/scratch
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/sl_lov6-home  392G  392G   28K 100% /home

Sincerely,
Yeeho Song
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ngisolfi at cs.cmu.edu  Thu Sep 17 09:31:42 2020
From: ngisolfi at cs.cmu.edu (Nick Gisolfi)
Date: Thu, 17 Sep 2020 09:31:42 -0400
Subject: [Lunch] Today @noon over Zoom
Message-ID:

https://cmu.zoom.us/j/492870487

We hope to see you there!

- Nick
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From predragp at andrew.cmu.edu  Tue Sep 22 17:38:02 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Tue, 22 Sep 2020 17:38:02 -0400
Subject: MATLAB 2020b released, license agreement renewal notice
Message-ID:

Dear Autonians,

MATLAB 2020b was released today. In less than 30 days, all of our
installations, including the ones on your laptops, will stop working due
to license expiration. CMU has renewed the licensing agreement with
MathWorks, and I already have MATLAB 2020b running on my laptop, including
a bunch of toolboxes. Due to the massive size (close to 30GB), it will
take me a few weeks to reinstall MATLAB on our servers. If you need it on
your laptop, just install it from MathWorks and use the VPN to activate it
against the CMU licensing server. That server is not under my control.

Predrag
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From predragp at andrew.cmu.edu  Wed Sep 23 10:03:50 2020
From: predragp at andrew.cmu.edu (Predrag Punosevac)
Date: Wed, 23 Sep 2020 10:03:50 -0400
Subject: MATLAB 2020b released, license agreement renewal notice
In-Reply-To: <7561F3C3-4A90-484A-B4D4-7D38B7BCF6F3@cmu.edu>
References: <7561F3C3-4A90-484A-B4D4-7D38B7BCF6F3@cmu.edu>
Message-ID:

Actually, I am one of the CMU license managers, although not the chief
one. The easiest way normally would be for me to just log in to my
MathWorks account from your computer. Let me check the other options and
get back to you. It has been a year since I played with this.

Predrag

On Wed, Sep 23, 2020, 9:39 AM Anthony Wertz wrote:

> Worse than that, using the stand-alone license available on CMU's
> website (for 2019b), MATLAB expires in 7 days... and there is no
> licensing update available. >:-/
>
> On MathWorks' website I already have an account and linked the license,
> but it doesn't give me access to 2020b (or any download for that
> matter). Any tips on installing it? Otherwise I probably need to find
> someone at CMU to complain to.
>
> - Anthony
>
> > On Sep 22, 2020, at 17:38, Predrag Punosevac wrote:
> >
> > Dear Autonians,
> >
> > MATLAB 2020b was released today. In less than 30 days, all of our
> > installations, including the ones on your laptops, will stop working
> > due to license expiration. CMU has renewed the licensing agreement
> > with MathWorks, and I already have MATLAB 2020b running on my laptop,
> > including a bunch of toolboxes. Due to the massive size (close to
> > 30GB), it will take me a few weeks to reinstall MATLAB on our servers.
> > If you need it on your laptop, just install it from MathWorks and use
> > the VPN to activate it against the CMU licensing server. That server
> > is not under my control.
> > Predrag

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arundhat at andrew.cmu.edu  Wed Sep 23 11:01:16 2020
From: arundhat at andrew.cmu.edu (Arundhati Banerjee)
Date: Wed, 23 Sep 2020 11:01:16 -0400
Subject: LOV7 Full
Message-ID:

LOV7 scratch is full. Please free up space where possible. Thank you!

Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/sl-home  169G  169G   20K 100% /home

Best regards,
Arundhati
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ngisolfi at cs.cmu.edu  Thu Sep 24 11:01:52 2020
From: ngisolfi at cs.cmu.edu (Nick Gisolfi)
Date: Thu, 24 Sep 2020 11:01:52 -0400
Subject: [Lunch] Today @noon over Zoom
Message-ID:

https://cmu.zoom.us/j/492870487

We hope to see you there!

- Nick
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From awertz at cmu.edu  Thu Sep 24 15:06:53 2020
From: awertz at cmu.edu (Anthony Wertz)
Date: Thu, 24 Sep 2020 15:06:53 -0400
Subject: MATLAB 2020b released, license agreement renewal notice
In-Reply-To:
References: <7561F3C3-4A90-484A-B4D4-7D38B7BCF6F3@cmu.edu>
Message-ID:

I didn't notice before, but if you click the non-obvious "Licensing
updates" link on the MATLAB software page (which goes here
https://www.cmu.edu/computing/software/all/matlab/matlab-license-update.html )
it tells you how to update. In short, it's in the Help menu: "Update
Current Licenses..."

- Anthony

> On Sep 23, 2020, at 10:03, Predrag Punosevac wrote:
>
> Actually, I am one of the CMU license managers, although not the chief
> one. The easiest way normally would be for me to just log in to my
> MathWorks account from your computer. Let me check the other options and
> get back to you.
>
> It has been a year since I played with this.
>
> Predrag
>
> On Wed, Sep 23, 2020, 9:39 AM Anthony Wertz wrote:
>
>> Worse than that, using the stand-alone license available on CMU's
>> website (for 2019b), MATLAB expires in 7 days... and there is no
>> licensing update available. >:-/
>>
>> On MathWorks' website I already have an account and linked the license,
>> but it doesn't give me access to 2020b (or any download for that
>> matter). Any tips on installing it? Otherwise I probably need to find
>> someone at CMU to complain to.
>>
>> - Anthony
>>
>> On Sep 22, 2020, at 17:38, Predrag Punosevac wrote:
>>
>> Dear Autonians,
>>
>> MATLAB 2020b was released today. In less than 30 days, all of our
>> installations, including the ones on your laptops, will stop working
>> due to license expiration. CMU has renewed the licensing agreement with
>> MathWorks, and I already have MATLAB 2020b running on my laptop,
>> including a bunch of toolboxes. Due to the massive size (close to
>> 30GB), it will take me a few weeks to reinstall MATLAB on our servers.
>> If you need it on your laptop, just install it from MathWorks and use
>> the VPN to activate it against the CMU licensing server. That server is
>> not under my control.
>>
>> Predrag

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ImagenPegada-1.tiff
Type: image/tiff
Size: 54172 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: Message signed with OpenPGP
URL:

From chiragn at cs.cmu.edu  Thu Sep 24 20:51:54 2020
From: chiragn at cs.cmu.edu (Chirag Nagpal)
Date: Thu, 24 Sep 2020 20:51:54 -0400
Subject: Fwd: CFP: Machine Learning for Health (ML4H 2020) Workshop at NeurIPS
In-Reply-To:
References:
Message-ID:

Fabian (yes, Auton Fabian) asked me to forward the NeurIPS ML4H Call for
Papers to the mailing list. They are also soliciting reviewers for the
workshop -- please check the attached mail below.

Chirag

---------- Forwarded message ---------
From: Fabian Falck
Date: Thu, Sep 24, 2020 at 5:57 PM
Subject: CFP: Machine Learning for Health (ML4H 2020) Workshop at NeurIPS
To: Chirag Nagpal

Hi Chirag,

I very much hope you are doing well - hope we can catch up soon!

Would you mind forwarding the CFP below to the users at autonlab.org email
distribution list on my behalf? The registration deadline is in 4 days,
and I am sure many Autonians might consider submitting either an extended
abstract or a full paper. Also, it would be great if you could point to
this form if anyone is interested in becoming a Programme Committee member
by contributing reviews.

Many thanks for your efforts!

Fabian

---

ML4H 2020 invites submissions describing innovative machine learning
research focused on relevant problems in health and biomedicine. Similar
to last year, *ML4H 2020 will both accept papers for a formal proceedings
and accept traditional, non-archival extended abstract submissions*.
Authors are invited to submit works for either track provided the work
fits within the purview of Machine Learning for Health. In addition, we
especially solicit works that speak to this year's ML4H theme: Advancing
Healthcare for All, such as work which explores any of the following
topics:

- Accessible diagnostic and prognostic systems
- Health equity
- Fairness and bias in machine learning systems
- Generalisation across populations or systems
- Improving patient participation in health
- Augmenting and supporting the capabilities of healthcare workers
- Rare or underserved diseases
- Democratising ML4H research
- Non-traditional delivery of healthcare

We are also piloting mentorship programs for authors and reviewers, and,
for the first time, hosting several awards for high quality submissions
and reviews! See the workshop website for the full CFP and more details,
and please direct any questions to:
ml4h.workshop.neurips.2020 at gmail.com

*Important Dates*

- Monday, Sep. 28 AoE: Submission Title/Summary Paragraph Deadline
- Friday, Oct. 2: Submission Deadline
- Friday, Oct. 16: Reviews Due
- Tuesday, Oct. 20: Limited Author Response Deadline
- Wednesday, Oct. 28: Final Decisions Released
- Friday or Saturday, Dec. 11-12, 2020: Virtual Workshop

--
*Chirag Nagpal*
PhD Student, Auton Lab
School of Computer Science
Carnegie Mellon University
cs.cmu.edu/~chiragn
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From awd at cs.cmu.edu  Fri Sep 25 14:35:17 2020
From: awd at cs.cmu.edu (Artur Dubrawski)
Date: Fri, 25 Sep 2020 14:35:17 -0400
Subject: Fwd: Thesis Proposal - Oct. 2, 2020 - Otilia Stretcu - Curriculum Learning
In-Reply-To:
References:
Message-ID:

This appears quite relevant to a few of us.

---------- Forwarded message ---------
From: Diane Stidle
Date: Fri, Sep 25, 2020 at 12:40 PM
Subject: Thesis Proposal - Oct. 2, 2020 - Otilia Stretcu - Curriculum Learning
To: ml-seminar at cs.cmu.edu, Rich Caruana <rcaruana at microsoft.com>

*Thesis Proposal*

Date: October 2, 2020
Time: 3:30pm (EDT)
Speaker: Otilia Stretcu
Zoom Meeting: https://cmu.zoom.us/j/94122713476?pwd=VlFqOWYrRkFQMnZaSE0vTXFtT3pRdz09
Meeting ID: 941 2271 3476
Passcode: 866793

*Title: Curriculum Learning*

Abstract:
AI researchers often disagree about the best strategy to train a machine
learning system, but there is one belief that is generally agreed upon:
humans are still much better learners than machines. Unlike AI systems,
humans do not learn difficult new tasks (e.g., solving differential
equations) from scratch, by looking at independent and identically
distributed examples of the task being performed by someone else. Instead,
new skills are often built progressively, starting with easier tasks and
gradually becoming able to perform harder ones. Curriculum Learning (CL)
is a line of work that tries to incorporate this human approach to
learning into machine learning. In this thesis we aim to discover the
problem settings in which different forms of CL are beneficial, and the
types of benefits they provide. Our completed work in machine translation
and image classification already showcases two different settings in which
CL is successful. Next, we plan to take this work further and tackle some
problems that are even more challenging for modern machine learning
systems, such as function composition and learning to do math with neural
networks. If successful, this work could help CL eventually become the
standard method for training systems, bringing machine learning one step
closer to human intelligence.

*Thesis Committee:*
Tom Mitchell, Co-Chair
Barnabás Póczos, Co-Chair
Ruslan Salakhutdinov
Rich Caruana, Microsoft Research
-------------- next part --------------
An HTML attachment was scrubbed...
URL: