Wednesday, November 6, 2019

connect a Debian client to a Proxmox Ceph cluster and mount CephFS

on the client, create the destination directories:

mkdir -p /etc/pve/priv/ /etc/ceph/


copy the keyring to the Debian client (beware: this uses the admin keyring, which grants full cluster rights):

scp <proxmox_ip>:/etc/pve/priv/ceph.client.admin.keyring /etc/pve/priv/.

copy the configuration file:

scp <proxmox_ip>:/etc/pve/ceph.conf /etc/ceph/.

test the connection:

ceph status

#####

mount CephFS on Debian

1. on ceph:


generate a keyring for client foo on Proxmox:
ceph auth get-or-create client.foo \
    mds 'allow rw path=/ceph/mount/point' \
    mon 'allow r' \
    osd 'allow rw pool=cephfs_data' \
    -o /etc/pve/priv/ceph.client.foo.keyring
 
check the client with:
ceph auth list

generate a minimal config:
ceph config generate-minimal-conf

and copy the output in /etc/ceph/ceph.conf on the client


 

2. on client:
aptitude install libcephfs2 ceph-common ceph-fuse 

copy the keyring file generated in step 1 to the client, as /etc/ceph/ceph.client.foo.keyring
 
mkdir -p /local/mount/point
 

add an entry in /etc/fstab like:
id=foo,conf=/etc/ceph/ceph.conf,client_mountpoint=/ceph/mount/point /local/mount/point fuse.ceph _netdev,defaults 0 0

_netdev is important here: without it the mount is attempted before the network is up and boot hangs
 
 
3. then mount it 
sudo mount -a 
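The fstab line can be assembled from variables; a small sketch (the client id and all paths below are the placeholders from the example, not real values):

```shell
#!/bin/sh
# Build the fuse.ceph fstab line from variables.
# All values are the placeholders used in the note above.
CLIENT_ID="foo"
CEPH_CONF="/etc/ceph/ceph.conf"
REMOTE_PATH="/ceph/mount/point"
LOCAL_PATH="/local/mount/point"

printf 'id=%s,conf=%s,client_mountpoint=%s %s fuse.ceph _netdev,defaults 0 0\n' \
    "$CLIENT_ID" "$CEPH_CONF" "$REMOTE_PATH" "$LOCAL_PATH"
```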

 
  
 

migrating from XCP-ng (Xen) to Proxmox (LVM)

create the VM on Proxmox, matching the XCP-ng VM's configuration as closely as possible
detach its disk

export the VM with XenCenter in OVA/OVF format (GUI)

extract the ova:

tar -xvf <ova_file.ova>
this results in a .vhd and an .ovf file

convert the disk file format:

qemu-img convert -f vpc <disk_file.vhd> -O qcow2 <disk_filename.qcow2>

import the disk into the VM:

qm importdisk <vm_id> <disk_filename.qcow2> <storage> -format qcow2

attach the disk to the VM, then boot
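The extract/convert/import steps can be scripted; this is a dry-run sketch that only prints the commands it would run (every name below is a placeholder, drop the leading "echo" to run for real):

```shell
#!/bin/sh
# Dry-run sketch of the XCP-ng -> Proxmox disk migration steps above.
OVA="exported_vm.ova"
VHD="disk1.vhd"               # name of the disk inside the OVA archive
QCOW2="${VHD%.vhd}.qcow2"     # derive the target file name from the vhd
VM_ID=100
STORAGE="local-lvm"

echo tar -xvf "$OVA"
echo qemu-img convert -f vpc "$VHD" -O qcow2 "$QCOW2"
echo qm importdisk "$VM_ID" "$QCOW2" "$STORAGE" -format qcow2
```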



Friday, October 11, 2019

CEPH

use an LVM logical volume as an OSD:


vgextend / vgcreate ...
lvcreate -l 100%FREE -n ceph pve
ceph-volume lvm prepare --bluestore --data pve/ceph
ceph-volume lvm activate --all


clean the disk for OSD usage:

ceph-volume lvm zap /dev/sd[X] --destroy


edit the crushmap


ceph osd tree
ceph osd crush reweight {name} {weight}
ceph osd crush set {id-or-name} {weight} root={pool-name} [{bucket-type}={bucket-name} ...]

or by hand:
ceph osd getcrushmap -o map.bin
crushtool -d map.bin -o map.txt
edit map.txt
crushtool -c map.txt -o map.bin
ceph osd setcrushmap -i map.bin

check the difference with the previous map:
crushtool -i crushmap --compare crushmap.new

create a 'bestred' rule:

ceph osd crush rule create-replicated bestred default osd


assign the rule to the pool / change the rule of the pool
ceph osd pool set redpool crush_rule bestred


create an RBD image:
rbd create --size {megabytes} {pool-name}/{image-name}

see where the object is mapped:
ceph osd map <pool_name> <rbd_name>

increase the number of replicas
ceph osd pool set bestred size 8
ceph osd pool set bestred min_size 4

 
 

change attributes


ceph osd pool set {pool-name} {field} {value}
Valid fields are:
  • size: Sets the number of copies of data in the pool.
  • pg_num: The placement group number.
  • pgp_num: Effective number when calculating pg placement.
  • crush_rule: rule number for mapping placement.

autoscaling

since Nautilus, enable the module first:
ceph mgr module enable pg_autoscaler

ceph osd pool autoscale-status

change: ceph osd pool set pool2 pg_autoscale_mode {on,warn,off}

reduce osd cache memory usage

(for osd.7 in this example)

change the memory target (previously 4 GiB), effective at runtime only:
ceph daemon osd.7 config set osd_memory_target 1610612736

or, to make it permanent:
ceph config set osd.7 osd_memory_target 1610612736

or globally (for all OSDs):
ceph config set global osd_memory_target <value>

it can also be set in ceph.conf:
[global]
... some config
     osd_memory_target = 939524096
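osd_memory_target is expressed in bytes; shell arithmetic makes the conversions behind the values above explicit:

```shell
#!/bin/sh
# Recompute the byte values used above from human-readable sizes.
MIB=$((1024 * 1024))
GIB=$((1024 * MIB))

echo $((3 * GIB / 2))   # 1.5 GiB = 1610612736 (the per-OSD value above)
echo $((896 * MIB))     # 896 MiB = 939524096  (the ceph.conf value above)
echo $((4 * GIB))       # 4 GiB   = 4294967296 (the previous setting)
```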








Wednesday, September 25, 2019

enable XCP-ng nested virtualisation

used here to test Proxmox with XCP-ng as the host

# xe vm-param-set uuid=<uuid> platform:exp-nested-hvm=true
# xe vm-param-set uuid=<uuid> platform:nic_type="e1000"




found here:
https://github.com/xcp-ng/xcp/wiki/Testing-XCP-ng-in-Virtual-Machine-(Nested-Virtualization)

Friday, September 20, 2019

Mysql / Adminer / Apache in FreeBSD jail


 

Prerequisites:
create the jail, and add a ZFS mount to store the databases in another dataset
mount the dataset to /mnt/db-data


install mysql

# install + start at boot with custom config file
pkg install mariadb104-server
sysrc mysql_enable="YES"
sysrc mysql_pidfile=/var/db/mysql/mysql.pid
sysrc mysql_optfile=/usr/local/etc/mysql/my.cnf

# set proper owner for the zfs mounted dataset
chown -R mysql:mysql /mnt/db-data

-> restart the jail

# setup the database
mysql_secure_installation --socket=/mnt/db-data/mysql.sock



logs location : /var/db/mysql/*.log

install adminer

pkg install adminer
the adminer php file is at
/usr/local/www/adminer/adminer/index.php


install apache

# in order to serve adminer
pkg install apache24
sysrc apache24_enable="yes"
service apache24 start
add the following config to /usr/local/etc/apache24/Includes/adminer.conf:
<VirtualHost *:80>
        ServerName 127.0.0.1
        ServerAlias adminer
        DocumentRoot "/usr/local/www/adminer/adminer/"

        ErrorLog "/var/log/adminer-error.log"
        CustomLog "/var/log/adminer-access_log" combined

</VirtualHost>





setup php 

# give a php configuration file
cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini

# needed to work with adminer
change the mysqli.default_socket line to:
mysqli.default_socket = /mnt/db-data/mysql.sock


# make apache interpret the php
add this to /usr/local/etc/apache24/Includes/php.conf
<IfModule dir_module>
    DirectoryIndex index.php index.html
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
</IfModule>


service apache24 reload


Friday, January 25, 2019

Check SSH key locally

ssh-keygen -lf <pubkey_file>

ex : ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub

Also use:
ssh-keyscan 127.0.0.1 | ssh-keygen -lf -

on a remote host:
ssh-keyscan <remote> | ssh-keygen -lf -


If you need it with the MD5 hashing algorithm:
for file in ~/.ssh/*.pub; do ssh-keygen -lf "$file" -E md5; done
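A self-contained demo of the fingerprint commands (generates a throwaway key in a temp dir; assumes ssh-keygen is installed):

```shell
#!/bin/sh
# Generate a throwaway ed25519 key and print its fingerprint in both
# the default (SHA256) and MD5 formats, as in the commands above.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$tmp/demo_key"

ssh-keygen -lf "$tmp/demo_key.pub"          # 256 SHA256:... (ED25519)
ssh-keygen -lf "$tmp/demo_key.pub" -E md5   # 256 MD5:...    (ED25519)

rm -r "$tmp"
```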

Thursday, June 7, 2018

Configure a custom NTP server on OpenELEC

Change the Timeservers setting in /storage/.cache/connman/settings to the desired value

Monday, January 23, 2017

Android tools

Here are the package names of the Android tools:

android-tools-fastboot
android-tools-adb

Wednesday, January 4, 2017

Install a CA on debian systems


Given a CA certificate file foo.crt, follow these steps to install it on a Debian-based system:


1. first method: create a directory for extra CA certificates in /usr/share/ca-certificates:

sudo mkdir /usr/share/ca-certificates/extra


Copy the CA .crt file to this directory:
sudo cp foo.crt /usr/share/ca-certificates/extra/foo.crt


Let the system add the .crt file's path relative to /usr/share/ca-certificates to /etc/ca-certificates.conf:
sudo dpkg-reconfigure ca-certificates



2. second method, without user interaction (which I use for docker): copy the certificate (with a .crt extension) into /usr/local/share/ca-certificates/

execute update-ca-certificates:

Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
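update-ca-certificates only picks up files ending in .crt under /usr/local/share/ca-certificates/, so a small guard avoids the classic mistake of copying a .pem file that then gets silently ignored (install_ca is a hypothetical helper, shown as a dry run):

```shell
#!/bin/sh
# Hypothetical helper: refuse certificates that update-ca-certificates
# would silently ignore (it only scans for *.crt files).
install_ca() {
    case "$1" in
        *.crt) echo cp "$1" /usr/local/share/ca-certificates/ ;;  # dry run
        *)     echo "error: $1 must end in .crt" >&2; return 1 ;;
    esac
}

install_ca foo.crt          # prints the copy command
install_ca foo.pem || true  # rejected with an error
```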


Python virtual environments

python3 -m venv ~/.virtualenvs/<env_name>
OR virtualenv <env_name>

source ~/.virtualenvs/<env_name>/bin/activate

(<env_name>) now prefixes the shell prompt; work inside the environment, then:

deactivate