mount an LVM logical volume as an OSD:
vgextend / vgcreate ...
lvcreate -l 100%FREE -n ceph pve
ceph-volume lvm prepare --bluestore --data pve/ceph
ceph-volume lvm activate --all
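for example, a minimal end-to-end sketch, assuming a spare disk /dev/sdb and a dedicated volume group cephvg (both names are hypothetical):
pvcreate /dev/sdb                        # initialize the disk for LVM
vgcreate cephvg /dev/sdb                 # dedicated volume group for the OSD
lvcreate -l 100%FREE -n cephlv cephvg    # one LV spanning all free space
ceph-volume lvm prepare --bluestore --data cephvg/cephlv
ceph-volume lvm activate --all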
clean the disk for OSD usage:
ceph-volume lvm zap /dev/sd[X] --destroy
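ceph-volume can also zap a logical volume directly, e.g. the hypothetical pve/ceph LV from above (with --destroy the LV itself is removed):
ceph-volume lvm zap pve/ceph --destroy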
edit the CRUSH map:
ceph osd tree
ceph osd crush reweight {name} {weight}
ceph osd crush set {id-or-name} {weight} root={pool-name} [{bucket-type}={bucket-name} ...]
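a hypothetical example, assuming the tree output shows an OSD osd.3 under host node1:
ceph osd crush reweight osd.3 1.0                     # adjust its CRUSH weight
ceph osd crush set osd.3 1.0 root=default host=node1  # (re)place it in the hierarchy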
or by hand:
ceph osd getcrushmap -o map.bin
crushtool -d map.bin -o map.txt
edit map.txt
crushtool -c map.txt -o map.bin
ceph osd setcrushmap -i map.bin
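in map.txt, a replicated rule looks roughly like this (exact fields vary with the Ceph release; the name and id here are just examples):
rule bestred {
        id 1
        type replicated
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}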
check the difference with the previous map:
crushtool -i crushmap --compare crushmap.new
create a 'bestred' rule:
ceph osd crush rule create-replicated bestred default osd
assign the rule to a pool / change a pool's rule:
ceph osd pool set redpool crush_rule bestred
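to check which rule a pool currently uses (redpool being the example pool above):
ceph osd pool get redpool crush_rule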
create an RBD image:
rbd create --size {megabytes} {pool-name}/{image-name}
see where the object is mapped:
ceph osd map <pool_name> <rbd_name>
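for instance, with a hypothetical 1 GiB image named disk1 in redpool:
rbd create --size 1024 redpool/disk1
ceph osd map redpool disk1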
increase the number of replicas:
ceph osd pool set redpool size 8
ceph osd pool set redpool min_size 4
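the current values can be verified with (same example pool):
ceph osd pool get redpool size
ceph osd pool get redpool min_size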
change attributes:
ceph osd pool set {pool-name} {field} {value}
Valid fields are:
size: the number of copies of data in the pool.
pg_num: the placement group number.
pgp_num: the effective number of placement groups used when calculating placement.
crush_rule: the rule used for mapping object placement.
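for example, to raise the placement-group count of the example pool (on recent releases pgp_num follows pg_num automatically; otherwise set it as well):
ceph osd pool set redpool pg_num 128
ceph osd pool set redpool pgp_num 128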
autoscaling
since Nautilus; enable the module first:
ceph mgr module enable pg_autoscaler
ceph osd pool autoscale-status
change the mode: ceph osd pool set pool2 pg_autoscale_mode {on,warn,off}
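optionally, the autoscaler can be given a hint about the pool's expected share of raw capacity (pool2 and the ratio value are just examples):
ceph osd pool set pool2 target_size_ratio 0.2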
reduce osd cache memory usage
(for osd.7 in this example)
change the memory target (previously 4 GiB):
ceph daemon osd.7 config set osd_memory_target 1610612736
or, to make the change persistent:
ceph config set osd.7 osd_memory_target 1610612736
or globally (for all OSDs):
ceph config set global osd_memory_target <value>
we can also add it to ceph.conf:
[global]
... some config
osd_memory_target = 939524096
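the effective value can then be checked with either of these (osd.7 as above):
ceph config get osd.7 osd_memory_target
ceph daemon osd.7 config get osd_memory_target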