
Rivanna partition limits (from https://www.rc.virginia.edu/userinfo/rivanna/queues/):

Partition | Max time / job | Max nodes / job | Max cores / job | Max cores / node | Max memory / core | Max memory / node / job | SU Charge Rate
standard  | 7 days         | 1               | 40              | 40               | 9GB               | 375GB                   | 1.00
parallel  | 3 days         | 25              | 1000            | 40               | 9GB               | 375GB                   | 1.00
largemem  | 4 days         | 1               | 16              | 16               | 60GB              | 975GB                   | 1.00
gpu       | 3 days         | 4               | 10              | 10               | 32GB              | 375GB                   | 3.00 *
dev       | 1 hour         | 2               | 8               | 4                | 6GB               | 36GB                    | 0.00
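
The SU Charge Rate column scales a job's service-unit cost. As a rough sketch only (this assumes the usual cores x wall-clock hours x rate accounting, and the helper name su_cost is hypothetical; the authoritative formula is on the RC page above):

    def su_cost(cores, hours, rate):
        # Estimated service units: cores x wall-clock hours x charge rate.
        return cores * hours * rate

    # A 40-core, 24-hour job on standard (rate 1.00) would cost 960 SUs;
    # the same usage on gpu (rate 3.00) would be charged 2880 SUs.
    print(su_cost(40, 24, 1.00))   # 960.0
    print(su_cost(40, 24, 3.00))   # 2880.0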

Sample slurm scripts: https://www.rc.virginia.edu/userinfo/rivanna/slurm/

Multiprocessing Example


  1.  Suppose each job needs to run 10 tasks in parallel, and we need to submit 20 such jobs (200 tasks in total). A SLURM job array launches the 20 jobs, and within each job Python's multiprocessing module spawns the 10 tasks; the array task ID tells each job which block of replicas it owns.

    Main.py >>>

    import sys
    import multiprocessing

    def run_replica(i):
        # The SLURM array task ID is passed as the first argument, so
        # jobs 0-19 each own replicas 10*job ... 10*job+9 (0-199 overall).
        job_number = int(sys.argv[1])
        replica_number = 10 * job_number + i
        # ... run the actual work for this replica here ...

    if __name__ == '__main__':
        jobs = []
        for i in range(10):
            p = multiprocessing.Process(target=run_replica, args=(i,))
            jobs.append(p)
            p.start()
        # Wait for all 10 workers so the job does not exit early.
        for p in jobs:
            p.join()
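
    With this numbering, each array task owns a distinct block of ten replicas. A quick standalone check of the arithmetic (illustrative only):

    for job_number in (0, 1, 19):
        replicas = [10 * job_number + i for i in range(10)]
        print(f"job {job_number}: replicas {replicas[0]}-{replicas[-1]}")
    # job 0: replicas 0-9, job 1: replicas 10-19, job 19: replicas 190-199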


    job.slurm >>>

    #!/bin/bash
    # One array task = one independent single-node job: a single SLURM
    # task with 10 CPUs for the 10 multiprocessing workers. --array=0-19
    # yields exactly 20 jobs (0-20 would yield 21), and the %A_%a pattern
    # gives every task its own output and error files.
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=10
    #SBATCH --time=10:00:00
    #SBATCH --output=slurm_%A_%a.out
    #SBATCH --error=slurm_%A_%a.err
    #SBATCH --partition=standard
    #SBATCH -A spinquest_standard
    #SBATCH --array=0-19

    # No MPI is involved, so no openmpi module is needed; the parallelism
    # comes from Python's multiprocessing inside Main.py.
    python3 Main.py $SLURM_ARRAY_TASK_ID
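
    Submit the whole array with a single command, sbatch job.slurm. SLURM expands it into 20 independent jobs (visible with squeue -u $USER), and with the %A_%a pattern above each task writes its own slurm_<jobid>_<taskid>.out and .err files.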


      

