{"id":947,"date":"2020-11-09T14:53:44","date_gmt":"2020-11-09T13:53:44","guid":{"rendered":"https:\/\/itrop.ird.fr\/wordpress\/?page_id=947"},"modified":"2022-04-06T14:48:55","modified_gmt":"2022-04-06T12:48:55","slug":"trainings-2019-admin-hpc-module-2-installation-slurm-fr","status":"publish","type":"page","link":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/","title":{"rendered":"Trainings 2019 &#8211; Admin HPC &#8211; module 2 &#8211; installation slurm &#8211; FR"},"content":{"rendered":"<h2>Installation de slurm<\/h2>\n<table>\n<thead>\n<tr>\n<th style=\"text-align: left;\">Description<\/th>\n<th style=\"text-align: left;\">Installation de Slurm sur centos 7<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td style=\"text-align: left;\">Supports de cours li\u00e9s<\/td>\n<td style=\"text-align: left;\"><a href=\"https:\/\/itrop.ird.fr\/wordpress\/index.php\/trainings-2019-admin-hpc-module-2\/\">HPC Administration Module2<\/a><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: left;\">Authors<\/td>\n<td style=\"text-align: left;\">Ndomassi TANDO (ndomassi.tando@ird.fr)<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: left;\">Creation Date<\/td>\n<td style=\"text-align: left;\">23\/09\/2019<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: left;\">Last Modified Date<\/td>\n<td style=\"text-align: left;\">23\/09\/2019<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr \/>\n<h3>Sommaire<\/h3>\n<p><!-- TOC depthFrom:2 depthTo:2 withLinks:1 updateOnSave:1 orderedList:0 --><\/p>\n<ul>\n<li><a href=\"#part-1\">Definition<\/a><\/li>\n<li><a href=\"#part-2\">Authentification et bases de donn\u00e9es<\/a><\/li>\n<li><a href=\"#part-3\">Installation de Slurm<\/a><\/li>\n<li><a href=\"#part-4\">Configuration des limites d'utilisation<\/a><\/li>\n<li><a href=\"#links\">Liens<\/a><\/li>\n<li><a href=\"#license\">License<\/a><\/li>\n<\/ul>\n<hr \/>\n<p><a name=\"part-1\"><\/a><\/p>\n<h2>Definition<\/h2>\n<p>Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.<\/p>\n<p><a href=\"https:\/\/slurm.schedmd.com\/\">https:\/\/slurm.schedmd.com\/<\/a><\/p>\n<hr \/>\n<p><a name=\"part-2\"><\/a><\/p>\n<h2>Authentification et bases de donn\u00e9es:<\/h2>\n<h3>Cr\u00e9er les utilisateurs pour munge et slurm:<\/h3>\n<p>Slurm et Munge requi\u00e8rent d'avoir les m\u00eames UID et GID sur chaque noeud du cluster.<br \/>\nPour tous les noeuds, lancer les commandes suivantes avant d'installer Slurm ou Munge:<\/p>\n<pre><code>$ export MUNGEUSER=1001\n$ groupadd -g $MUNGEUSER munge\n$ useradd  -m -c &quot;MUNGE Uid &#039;N&#039; Gid Emporium&quot; -d \/var\/lib\/munge -u $MUNGEUSER -g munge  -s \/sbin\/nologin munge\n$ export SLURMUSER=1002\n$ groupadd -g $SLURMUSER slurm\n$ useradd  -m -c &quot;SLURM workload manager&quot; -d \/var\/lib\/slurm -u $SLURMUSER -g slurm  -s \/bin\/bash slurm<\/code><\/pre>\n<h3>Installation de Munge pour l'authentification:<\/h3>\n<pre><code>$ yum install epel-release -y<\/code><\/pre>\n<pre><code>$ yum install munge munge-libs munge-devel -y<\/code><\/pre>\n<h4>Cr\u00e9er une cl\u00e9 d'authentification Munge:<\/h4>\n<pre><code>$ \/usr\/sbin\/create-munge-key<\/code><\/pre>\n<h4>Copier la cl\u00e9 d'authentification sur chaque noeud:<\/h4>\n<pre><code>$ cp \/etc\/munge\/munge.key \/home\n$ cexec cp \/home\/munge.key \/etc\/munge<\/code><\/pre>\n<h4>Mettre les droits:<\/h4>\n<pre><code>$ chown -R munge: \/etc\/munge\/ 
#### Enable and start the munge service

```
$ systemctl enable munge
$ systemctl start munge
$ cexec systemctl enable munge
$ cexec systemctl start munge
```

#### Test munge from the master node

```
$ munge -n | unmunge
$ munge -n | ssh <somehost_in_cluster> unmunge
```

### Install and configure MariaDB

#### Install mariadb with the command:

```
$ yum install mariadb-server -y
```

#### Enable and start the mariadb service:

```
$ systemctl start mariadb
$ systemctl enable mariadb
```

#### Secure the installation:

Set a root password for mariadb:

```
$ mysql_secure_installation
```

#### Adjust the InnoDB configuration:

Raise the values of `innodb_lock_wait_timeout` and `innodb_log_file_size`.

Create the file `/etc/my.cnf.d/innodb.cnf` with the following lines:

```
[mysqld]
innodb_buffer_pool_size=1024M
innodb_log_file_size=64M
innodb_lock_wait_timeout=900
```

To apply these changes, stop MariaDB, remove the old log files, and restart:

```
$ systemctl stop mariadb
$ mv /var/lib/mysql/ib_logfile? /tmp/
$ systemctl start mariadb
```

---

<a name="part-3"></a>
## Installing Slurm

### Install the prerequisites

```
$ yum install openssl openssl-devel pam-devel rpm-build numactl numactl-devel hwloc hwloc-devel lua lua-devel readline-devel rrdtool-devel ncurses-devel man2html libibmad libibumad -y
```

### Download the tarball

```
$ wget https://download.schedmd.com/slurm/slurm-19.05.0.tar.bz2
```

### Build the RPMs

```
$ rpmbuild -ta slurm-19.05.0.tar.bz2
```

The RPMs end up in /root/rpmbuild/RPMS/x86_64/.

### Install Slurm on the master node and on the compute nodes

From the RPM directory, run:

```
$ yum --nogpgcheck localinstall slurm-*
```

### Create and configure the slurm_acct_db database

```
$ mysql -u root -p
mysql> grant all on slurm_acct_db.* TO 'slurm'@'localhost' identified by 'some_pass' with grant option;
mysql> create database slurm_acct_db;
```

### Configure the Slurm database backend

Edit `/etc/slurm/slurmdbd.conf` with the following parameters:

```
AuthType=auth/munge
DbdAddr=192.168.1.250
DbdHost=master0
SlurmUser=slurm
DebugLevel=4
LogFile=/var/log/slurm/slurmdbd.log
PidFile=/var/run/slurmdbd.pid
StorageType=accounting_storage/mysql
StorageHost=master0
StoragePass=some_pass
StorageUser=slurm
StorageLoc=slurm_acct_db
```

Then enable and start the slurmdbd service:

```
$ systemctl start slurmdbd
$ systemctl enable slurmdbd
$ systemctl status slurmdbd
```

This will create the tables of the slurm_acct_db database.
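To confirm that slurmdbd actually created its schema, you can list the tables. A hedged check, assuming the `slurm` database user and the placeholder password from the slurmdbd.conf above (exact table names vary with the Slurm version, but you should see entries such as `acct_table` and `user_table`):

```
$ mysql -u slurm -p -e 'SHOW TABLES;' slurm_acct_db
```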
### Configure /etc/slurm/slurm.conf

Run `lscpu` on each node to get information about its processors.

Go to [http://slurm.schedmd.com/configurator.easy.html](http://slurm.schedmd.com/configurator.easy.html) to generate a configuration file for Slurm.

Adjust the following parameters in `/etc/slurm/slurm.conf` to match the characteristics of your cluster:

```
ClusterName=IRD
ControlMachine=master0
ControlAddr=192.168.1.250
SlurmUser=slurm
AuthType=auth/munge
StateSaveLocation=/var/spool/slurmd
SlurmdSpoolDir=/var/spool/slurmd
SlurmctldLogFile=/var/log/slurm/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm/slurmd.log
AccountingStorageHost=master0
AccountingStoragePass=3devslu!!
AccountingStorageUser=slurm
NodeName=node21 CPUs=16 Sockets=4 RealMemory=32004 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN
PartitionName=r900 Nodes=node21 Default=YES MaxTime=INFINITE State=UP
```

Now push the slurm.conf and slurmdbd.conf files to all the compute nodes:

```
$ cp /etc/slurm/slurm.conf /home
$ cp /etc/slurm/slurmdbd.conf /home
$ cexec cp /home/slurm.conf /etc/slurm
$ cexec cp /home/slurmdbd.conf /etc/slurm
```

### Create the directories for the logs

#### On the master node:

```
$ mkdir /var/spool/slurmctld
$ chown slurm:slurm /var/spool/slurmctld
$ chmod 755 /var/spool/slurmctld
$ mkdir /var/log/slurm
$ touch /var/log/slurm/slurmctld.log
$ touch /var/log/slurm/slurm_jobacct.log /var/log/slurm/slurm_jobcomp.log
$ chown -R slurm:slurm /var/log/slurm/
```

#### On the compute nodes:

```
$ mkdir /var/spool/slurmd
$ chown slurm: /var/spool/slurmd
$ chmod 755 /var/spool/slurmd
$ mkdir /var/log/slurm/
$ touch /var/log/slurm/slurmd.log
$ chown -R slurm:slurm /var/log/slurm/slurmd.log
```

#### Test the configuration:

```
$ slurmd -C
```

You should get something like:

```
NodeName=master0 CPUs=16 Boards=1 SocketsPerBoard=2 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=23938 UpTime=22-10:03:46
```

#### Start the slurmd service on the compute nodes:

```
$ systemctl enable slurmd.service
$ systemctl start slurmd.service
$ systemctl status slurmd.service
```

#### Start the slurmctld service on the master node:

```
$ systemctl enable slurmctld.service
$ systemctl start slurmctld.service
$ systemctl status slurmctld.service
```

#### Change a node's state from down to idle

```
$ scontrol update NodeName=nodeX State=RESUME
```

where nodeX is the name of the node.

#### Modifying the /etc/slurm/slurm.conf configuration file:

Whenever you modify `/etc/slurm/slurm.conf`, propagate the file to all the nodes, then run the following command on the master node:

```
$ scontrol reconfig
```
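Once slurmctld and the slurmd daemons are running, a quick way to confirm that the controller sees the nodes is `sinfo`. A sketch assuming the example node and partition from the slurm.conf above; the output will look roughly like this (the `*` marks the default partition):

```
$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
r900*        up   infinite      1   idle node21
$ scontrol show node node21
```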
---

<a name="part-4"></a>
## Configure usage limits

### Modify /etc/slurm/slurm.conf

Set the `AccountingStorageEnforce` parameter to:

```
AccountingStorageEnforce=limits
```

Copy the modified file to the nodes.

Restart the slurmctld service to apply the change:

```
$ systemctl restart slurmctld
```

### Create a cluster

The cluster is the name you want to give to your Slurm cluster.

In `/etc/slurm/slurm.conf`, set the following line:

```
ClusterName=ird
```

To enforce usage limits, create an `accounting cluster` with the command:

```
$ sacctmgr add cluster ird
```

### Create an accounting account

An `accounting account` is a group created in Slurm that lets the administrator manage users' rights to use Slurm.

Example: create a group for the members of the bioinfo team:

```
$ sacctmgr add account bioinfo Description="bioinfo member"
```

Create a group whose members are allowed to use the gpu partition:

```
$ sacctmgr add account gpu_group Description="Members can use the gpu partition"
```

### Create a user account

With the value `limits` set in `/etc/slurm/slurm.conf`, users must be created in Slurm before they can submit jobs.

```
$ sacctmgr create user name=xxx DefaultAccount=yyy
```

### Add an existing user account to another accounting account:

```
$ sacctmgr add user xxx Account=zzzz
```

### Modify a compute node's description

#### Declare the size of the /scratch partition

In the file `/etc/slurm/slurm.conf`:

##### Set the TmpFS variable to the scratch path

```
TmpFS=/scratch
```

##### Add the TmpDisk value for /scratch

`TmpDisk` is the size of the /scratch partition in MB, added to the line beginning with NodeName.

For example, for a node with 3 TB of disk (3 TB = 3,000,000 MB):

```
NodeName=node21 CPUs=16 Sockets=4 RealMemory=32004 TmpDisk=3000000 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN
```

### Modify a partition definition

A partition is a job queue spanning several nodes, with a number of characteristics such as time limits, available memory, and so on.

Partitions make it possible to prioritize jobs between users.

Modify the line beginning with `PartitionName` in the file `/etc/slurm/slurm.conf`.

Several options are available depending on what you want to do.

#### Add a time limit for running jobs (MaxTime)

A time limit on partitions lets Slurm manage priorities between jobs on the same node.

Add it to the line beginning with `PartitionName`, with the value in minutes.

For example, for a partition limited to 1 day (1440 minutes), the partition definition becomes:

```
PartitionName=short Nodes=node21,node[12-15] MaxTime=1440 State=UP
```
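After propagating the file and running `scontrol reconfig`, you can verify that the limit is active. A hedged example using the `short` partition defined above; the exact refusal message depends on the Slurm version:

```
$ scontrol show partition short              # MaxTime should read 1-00:00:00
$ srun --partition=short --time=2000 hostname
srun: error: Requested time limit is invalid (missing or exceeds some limit)
```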
#### Add a maximum memory per CPU (MaxMemPerCPU)

Since memory is a consumable resource, MaxMemPerCPU not only protects the node's memory but will also automatically increase a job's maximum number of cores when possible.

Add it to the `PartitionName` line, with the amount of memory in MB.

MaxMemPerCPU is normally set to the ratio MaxMem/NumCores. For example, a node with 32004 MB of RAM and 16 cores gives 32004/16 ≈ 2000 MB per CPU; with 2 GB/CPU, the partition definition becomes:

```
PartitionName=normal Nodes=node21,node[12-15] MaxMemPerCPU=2000 MaxTime=4320 State=UP
```

---

<a name="links"></a>
### Links

- Related courses: [HPC Trainings](https://itrop.ird.fr/wordpress/index.php/trainings-2019-hpc/)

---

<a name="license"></a>
### License

The resource material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License ([here](http://creativecommons.org/licenses/by-nc-sa/4.0/)).

![CC BY-NC-SA](http://creativecommons.org.nz/wp-content/uploads/2012/05/by-nc-sa1.png)
content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data1\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/\",\"url\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/\",\"name\":\"Trainings 2019 - Admin HPC - module 2 - installation slurm - FR - itrop\",\"isPartOf\":{\"@id\":\"https:\/\/bioinfo.ird.fr\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#primaryimage\"},\"thumbnailUrl\":\"http:\/\/creativecommons.org.nz\/wp-content\/uploads\/2012\/05\/by-nc-sa1.png\",\"datePublished\":\"2020-11-09T13:53:44+00:00\",\"dateModified\":\"2022-04-06T12:48:55+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#primaryimage\",\"url\":\"http:\/\/creativecommons.org.nz\/wp-content\/uploads\/2012\/05\/by-nc-sa1.png\",\"contentUrl\":\"http:\/\/creativecommons.org.nz\/wp-content\/uploads\/2012\/05\/by-nc-sa1.png\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/bioinfo.ird.fr\/index.php\/en\/front-page-2\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Trainings &#8211; FR\",\"item\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Trainings 2019 &#8211; Admin HPC &#8211; module 2\",\"item\":\"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"Trainings 2019 &#8211; Admin HPC &#8211; module 2 &#8211; installation slurm &#8211; 
FR\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/bioinfo.ird.fr\/#website\",\"url\":\"https:\/\/bioinfo.ird.fr\/\",\"name\":\"itrop\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/bioinfo.ird.fr\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/bioinfo.ird.fr\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/bioinfo.ird.fr\/#organization\",\"name\":\"i-Trop\",\"url\":\"https:\/\/bioinfo.ird.fr\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/bioinfo.ird.fr\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/bioinfo.ird.fr\/wp-content\/uploads\/2021\/10\/i-tropTwt5.png\",\"contentUrl\":\"https:\/\/bioinfo.ird.fr\/wp-content\/uploads\/2021\/10\/i-tropTwt5.png\",\"width\":1356,\"height\":1356,\"caption\":\"i-Trop\"},\"image\":{\"@id\":\"https:\/\/bioinfo.ird.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/ItropBioinfo\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Trainings 2019 - Admin HPC - module 2 - installation slurm - FR - itrop","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/","og_locale":"fr_FR","og_type":"article","og_title":"Trainings 2019 - Admin HPC - module 2 - installation slurm - FR - itrop","og_description":"Installation de slurm Description Installation de Slurm sur centos 7 Supports de cours li\u00e9s HPC Administration Module2 Authors Ndomassi TANDO&hellip; Lire la suite","og_url":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/","og_site_name":"itrop","article_modified_time":"2022-04-06T12:48:55+00:00","og_image":[{"url":"http:\/\/creativecommons.org.nz\/wp-content\/uploads\/2012\/05\/by-nc-sa1.png","type":"","width":"","height":""}],"twitter_card":"summary_large_image","twitter_site":"@ItropBioinfo","twitter_misc":{"Dur\u00e9e de lecture estim\u00e9e":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/","url":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/","name":"Trainings 2019 - Admin HPC - module 2 - installation slurm - FR - 
itrop","isPartOf":{"@id":"https:\/\/bioinfo.ird.fr\/#website"},"primaryImageOfPage":{"@id":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#primaryimage"},"image":{"@id":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#primaryimage"},"thumbnailUrl":"http:\/\/creativecommons.org.nz\/wp-content\/uploads\/2012\/05\/by-nc-sa1.png","datePublished":"2020-11-09T13:53:44+00:00","dateModified":"2022-04-06T12:48:55+00:00","breadcrumb":{"@id":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#primaryimage","url":"http:\/\/creativecommons.org.nz\/wp-content\/uploads\/2012\/05\/by-nc-sa1.png","contentUrl":"http:\/\/creativecommons.org.nz\/wp-content\/uploads\/2012\/05\/by-nc-sa1.png"},{"@type":"BreadcrumbList","@id":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/trainings-2019-admin-hpc-module-2-installation-slurm-fr\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/bioinfo.ird.fr\/index.php\/en\/front-page-2\/"},{"@type":"ListItem","position":2,"name":"Trainings &#8211; FR","item":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/"},{"@type":"ListItem","position":3,"name":"Trainings 2019 &#8211; Admin HPC &#8211; module 2","item":"https:\/\/bioinfo.ird.fr\/index.php\/trainings-fr\/trainings-2019-admin-hpc-module-2\/"},{"@type":"ListItem","position":4,"name":"Trainings 2019 &#8211; Admin HPC &#8211; module 2 &#8211; installation slurm &#8211; 
FR"}]},{"@type":"WebSite","@id":"https:\/\/bioinfo.ird.fr\/#website","url":"https:\/\/bioinfo.ird.fr\/","name":"itrop","description":"","publisher":{"@id":"https:\/\/bioinfo.ird.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/bioinfo.ird.fr\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/bioinfo.ird.fr\/#organization","name":"i-Trop","url":"https:\/\/bioinfo.ird.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/bioinfo.ird.fr\/#\/schema\/logo\/image\/","url":"https:\/\/bioinfo.ird.fr\/wp-content\/uploads\/2021\/10\/i-tropTwt5.png","contentUrl":"https:\/\/bioinfo.ird.fr\/wp-content\/uploads\/2021\/10\/i-tropTwt5.png","width":1356,"height":1356,"caption":"i-Trop"},"image":{"@id":"https:\/\/bioinfo.ird.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/ItropBioinfo"]}]}},"_links":{"self":[{"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/pages\/947","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/comments?post=947"}],"version-history":[{"count":1,"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/pages\/947\/revisions"}],"predecessor-version":[{"id":948,"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/pages\/947\/revisions\/948"}],"up":[{"embeddable":true,"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/pages\/928"}],"wp:attachment":[{"href":"https:\/\/bioinfo.ird.fr\/index.php\/wp-json\/wp\/v2\/media?parent=947"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}