using_ogs_sge

Differences

This shows you the differences between two versions of the page.

Previous revision: using_ogs_sge [2018/02/20 21:48] by mgstauff [Output from your job]
Current revision:  using_ogs_sge [2018/03/02 20:35] by mgstauff [Per-job memory limit]
Line 61:

   [mgstauff@chead ~]$ qsub myjobscript
-  Your job 27657 ("myjob") has been submitted
+  Your job 27657 ("myjobscript") has been submitted

Here's an example BASH script that could be in the file named ''myjobscript'' (you can cut-n-paste into a text editor on the cluster to try it yourself):
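
The script itself lies outside the range shown in this comparison, so it does not appear here. As a rough sketch only (the ''#$'' directives and commands below are illustrative assumptions, not the page's actual example), a minimal ''myjobscript'' could look like:

  #!/bin/bash
  #$ -cwd                  # run the job from the directory it was submitted from
  #$ -j y                  # merge stderr into the stdout file
  #$ -o myjobscript.log    # write the job's output to this file
  # Placeholder work so the job runs briefly and produces some output
  echo "Running on host: $(hostname)"
  echo "Job ID: $JOB_ID"
  sleep 30
  echo "Finished at: $(date)"

With no ''-N'' option, Grid Engine names the job after the script file, which matches the submission message above ("myjobscript").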
Line 476:

==== Per-job memory limit ====
-There is a limit of 62GB per job at this point. This allows a single ''qlogin'' session to run a large memory job on a single compute node.
+There is a limit of 30GB per job at this point for jobs running on the default queue, ''all.q''. See the notes on the ''himem.q'' queue elsewhere on this page if your job needs more memory.

NOTE that if you request this much memory, you might have to wait for a node to become free, since a request that large uses most of a node's memory resources. Your job might also be slowed, along with other jobs on the node, because memory swap space will most likely be used.
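
For reference, here is a sketch of how a large memory request might be submitted. The resource name ''h_vmem'' and the 40G value are assumptions based on common Grid Engine setups and may differ on this cluster; the ''himem.q'' queue name comes from the page itself.

  # Submit myjobscript to the high-memory queue with a ~40GB per-job
  # memory request (h_vmem is an assumed resource name here)
  [mgstauff@chead ~]$ qsub -q himem.q -l h_vmem=40G myjobscript

The same request can instead be embedded in the script with ''#$ -q himem.q'' and ''#$ -l h_vmem=40G'' lines.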