{{>toc}}
h1. Management Cluster Ceph
h2. Liens
* [[Openstack Management TTNN]]
* [[Openstack Setup VM pas dans openstack]]
* [[Openstack Installation nouvelle node du cluster]]
* [[Openstack Installation TTNN]]
* "Openstack tools for ttnn":/projects/git-tetaneutral-net/repository/openstack-tools
h2. Adding a regular OSD
<pre>
$ ceph-disk prepare --zap-disk --cluster-uuid 1fe74663-8dfa-486c-bb80-3bd94c90c967 --fs-type=ext4 /dev/sdX
$ smartctl --smart=on /dev/sdX # For monitoring.
</pre>
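The <ID> used below is the id assigned to the new OSD; a quick way to find it again (generic commands, nothing specific to this cluster):
<pre>
$ ceph osd tree      # an OSD not yet placed in the crush map is listed at the end with weight 0
$ ceph-disk list     # maps /dev/sdX partitions to their osd ids
</pre>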
For an HDD:
<pre>
$ ceph osd crush add osd.<ID> 0 root=default host=g3
</pre>
For an SSD:
<pre>
$ ceph osd crush add osd.<ID> 0 root=ssd host=g3-ssd
</pre>
Then allow Ceph to place data on it:
<pre>
$ /root/tools/ceph-reweight-osds.sh osd.<ID>
</pre>
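The reweight makes Ceph start moving data onto the new OSD; the rebalance can be followed with the standard status commands:
<pre>
$ ceph -s    # overall health, recovery/backfill summary
$ ceph -w    # follow the rebalance live
</pre>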
h2. Adding an OSD that shares the SSD with the OS
Normally you hand Ceph a whole disk and ceph-disk creates two partitions, one for the OSD journal and one for the data.
On the tetaneutral SSD, which also hosts the OS, the data partition has to be created by hand instead (/dev/sda2 in the example below).
Debian (MBR format):
<pre>
apt-get install parted   # provides partprobe
fdisk /dev/sda
n          # new partition
p          # primary
<enter>    # accept the defaults (partition number, first and last sector)
<enter>
<enter>
<enter>
w          # write the partition table and exit
partprobe  # make the kernel re-read the partition table
</pre>
Ubuntu (using parted):
<pre>
# parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA SAMSUNG MZ7KM480 (scsi)
Disk /dev/sdb: 480GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 20.0GB 20.0GB primary ext4 boot
2 20.0GB 36.0GB 16.0GB primary linux-swap(v1)
(parted) mkpart
Partition type? primary/extended?
Partition type? primary/extended? primary
File system type? [ext2]? xfs
Start?
Start? 36.0GB
End? 100%
(parted) print
Model: ATA SAMSUNG MZ7KM480 (scsi)
Disk /dev/sdb: 480GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 20.0GB 20.0GB primary ext4 boot
2 20.0GB 36.0GB 16.0GB primary linux-swap(v1)
3 36.0GB 480GB 444GB primary
(parted) quit
Information: You may need to update /etc/fstab.
</pre>
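Whichever tool was used, it is worth checking that the kernel sees the new data partition before handing it to ceph-disk (generic check; device names follow the examples above):
<pre>
$ lsblk /dev/sda             # the newly created data partition should be listed
$ grep sda /proc/partitions
</pre>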
Then prepare the disk as usual:
<pre>
ceph-disk prepare --fs-type=ext4 --cluster-uuid 1fe74663-8dfa-486c-bb80-3bd94c90c967 /dev/sda2
ceph-disk activate /dev/sda2
ceph osd crush add osd.<ID> 0 root=ssd host=g3-ssd
</pre>
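At this point the OSD daemon should be running and its data partition mounted; a quick sanity check (paths are the Ceph defaults, replace <ID>):
<pre>
$ ceph osd tree                         # osd.<ID> should appear, up, under root=ssd host=g3-ssd
$ df -h /var/lib/ceph/osd/ceph-<ID>     # data partition mounted by ceph-disk activate
</pre>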
Then allow Ceph to place data on it:
<pre>
$ /root/tools/ceph-reweight-osds.sh osd.<ID>
</pre>
h2. Removing an OSD
<pre>
remove_osd(){
    name="$1"                          # e.g. osd.2
    ceph osd out ${name}
    stop ceph-osd id=${name#osd.}      # or on debian: /etc/init.d/ceph stop ${name}
    ceph osd crush remove ${name}
    ceph auth del ${name}
    ceph osd rm ${name}
    ceph osd tree
}
</pre>
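Example usage, from a root shell on a node with the admin keyring (wait for the cluster to finish rebalancing afterwards):
<pre>
$ remove_osd osd.2
$ ceph -s          # wait until health is back to OK
</pre>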
h2. Cold replacement of a cache tier
upstream doc: http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
<pre>
ceph osd tier cache-mode ec8p2c forward
rados -p ec8p2c cache-flush-evict-all
ceph osd tier remove-overlay ec8p2
ceph osd tier remove ec8p2 ec8p2c
rados rmpool ec8p2c ec8p2c --yes-i-really-really-mean-it
ceph osd pool create ec8p2c 128 128 replicated
ceph osd tier add ec8p2 ec8p2c
ceph osd tier cache-mode ec8p2c writeback
ceph osd tier set-overlay ec8p2 ec8p2c
ceph osd pool set ec8p2c size 3
ceph osd pool set ec8p2c min_size 2
ceph osd pool set ec8p2c hit_set_type bloom
ceph osd pool set ec8p2c hit_set_count 1
ceph osd pool set ec8p2c hit_set_period 3600
ceph osd pool set ec8p2c target_max_bytes 200000000000
ceph osd pool set ec8p2c target_max_objects 10000000
ceph osd pool set ec8p2c cache_target_dirty_ratio 0.4
ceph osd pool set ec8p2c cache_target_full_ratio 0.8
</pre>
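To check that the tier is wired up correctly afterwards (generic commands, not specific to this setup):
<pre>
$ ceph osd dump | grep ec8p2    # ec8p2c should show cache_mode writeback and tier_of pointing at ec8p2
$ ceph df                       # both pools listed, the cache pool starts out empty
</pre>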