Maltfield Log/2018 Q3
My work log from the year 2018 Quarter 3. I intentionally made this verbose to make future admins' work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.

=See Also=
# [[User:Maltfield]]
# [[Special:Contributions/Maltfield]]
=Fri Jul 06, 2018=
# yesterday I calculated that we should backup about ~34.87G of data from hetzner1 to glacier before shutting down the node and terminating its contract
## note that this size will likely be much smaller after compression.
# I confirmed that we have 128G of available space to '/' on hetzner2
<pre>
[root@hetzner2 ~]# date
Fri Jul 6 17:59:12 UTC 2018
[root@hetzner2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 197G 60G 128G 32% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 3.1G 29G 10% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md1 488M 289M 174M 63% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/1005
[root@hetzner2 ~]#
</pre>
# we also have 165G of available space on '/usr' on hetzner1
<pre>
osemain@dedi978:~$ date
Fri Jul 6 19:59:31 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 9.8G 363M 8.9G 4% /
udev 10M 0 10M 0% /dev
tmpfs 787M 788K 786M 1% /run
/dev/dm-1 322G 142G 165G 47% /usr
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/md0 95M 30M 66M 32% /boot
/dev/mapper/vg-tmp 4.8G 308M 4.3G 7% /tmp
/dev/mapper/vg-var 20G 2.3G 17G 13% /var
tmpfs 2.0G 0 2.0G 0% /var/spool/exim/scan
/dev/mapper/vg-vartmp 5.8G 1.8G 3.8G 32% /var/tmp
osemain@dedi978:~$
</pre>
# while it may make sense to do this upload to glacier on hetzner1, I've had hetzner1 terminate my screen sessions randomly in the past. I'd rather do it on hetzner2--where I actually have control over the server with root credentials. Therefore, I think I'll make the compressed tarballs on hetzner1 & scp them to hetzner2. On hetzner2 I'll encrypt the tarballs and create their (also encrypted) corresponding metadata files (listing all the files in the tarballs, for easy/cheaper querying later), and upload both.
# I created a wiki article for this CHG, which will be the canonical URL listed in the metadata files for info on what this data is that I've uploaded to glacier
# I discovered that the DBs on hetzner1 are necessarily accessible to the public Internet (ugh).
## so I _could_ do the mysqldump from hetzner2, but it's better to do it locally (data tx & sec), and then compress it _before_ sending it to hetzner2
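The hetzner2-side portion of that plan (build a file-listing "metadata" file per tarball, encrypt both, upload both) could look roughly like the sketch below. The demo tarball is a stand-in so the sketch is self-contained; the gpg passphrase variable and the glacier vault name are illustrative placeholders, and the gpg/aws invocations are shown commented out:
<pre>
#!/bin/sh
# sketch of the per-tarball steps on hetzner2; uses a throwaway demo tarball
set -e
cd "$(mktemp -d)"

# stand-in for a tarball scp'd over from hetzner1
mkdir demo && echo "hello" > demo/file.txt
archive="final_backup_before_hetzner1_deprecation_demo_home.tar.bz2"
tar -cjf "${archive}" demo

# metadata file: full listing of the tarball contents, so we can later answer
# "which tarball holds file X?" without paying for a glacier retrieval
tar -tjf "${archive}" > "${archive}.filelist.txt"
cat "${archive}.filelist.txt"

# the encrypt & upload steps would then be something like (not run here;
# passphrase & vault name are placeholders):
#   gpg --batch --symmetric --cipher-algo AES256 --passphrase "${pass}" "${archive}"
#   gpg --batch --symmetric --cipher-algo AES256 --passphrase "${pass}" "${archive}.filelist.txt"
#   aws glacier upload-archive --account-id - --vault-name <ourVault> \
#     --archive-description "${archive}" --body "${archive}.gpg"
</pre>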
# began backing-up files on osemain
<pre>
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"
# prepare backup dir
mkdir -p "${backupDir}"
# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*
# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
# dump DBs
dbName=openswh
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
dbName=ose_fef
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
dbName=ose_website
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
dbName=osesurv
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
dbName=osewiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
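The five near-identical dump stanzas above could be collapsed into a single loop. A dry-run sketch (it only prints each pipeline rather than executing it, and the passwords remain placeholders as in the log):
<pre>
#!/bin/sh
# dry run: print the mysqldump pipeline for each osemain database
stamp=$(date -u +%Y%m%d-%H%M%S)
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_$(whoami)_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_$(whoami)_${stamp}"

for dbName in openswh ose_fef ose_website osesurv osewiki; do
  # in the real run each dbName has its own password (CHANGEME here)
  echo "time nice mysqldump -u${dbName} -pCHANGEME --all-databases" \
       "| bzip2 -c > ${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
done
</pre>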
# began backups on oseblog
<pre>
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"
# prepare backup dir
mkdir -p "${backupDir}"
# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*
# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
# dump DB
dbName=oseblog
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
# began backups on osecivi
<pre>
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"
# prepare backup dir
mkdir -p "${backupDir}"
# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*
# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
# dump DBs
dbName=osecivi
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
dbName=osedrupal
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
# began backup of oseforum
<pre>
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"
# prepare backup dir
mkdir -p "${backupDir}"
# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*
# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
# dump DB
dbName=oseforum
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
# began backup of microft
<pre>
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"
# prepare backup dir
mkdir -p "${backupDir}"
# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*
# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
# dump DBs
dbName=microft_db2
dbUser=microft_2
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
dbName=microft_drupal1
dbUser=microft_d1u
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
dbName=microft_wiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
# after compression (but before encryption), here's the resulting sizes of the backups
## oseforum
<pre>
oseforum@dedi978:~$ find noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
57M noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
oseforum@dedi978:~$
</pre>
## osecivi 16M
<pre>
osecivi@dedi978:~/noBackup$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
180K /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2
2.3M /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2
12M /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2
1.1M /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2
osecivi@dedi978:~/noBackup$
</pre>
## oseforum 957M
<pre>
oseforum@dedi978:~$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
854M /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
oseforum@dedi978:~$
</pre>
# created a safe dir on hetzner2 to store the files before encrypting & uploading to glacier
<pre>
[root@hetzner2 tmp]# cd /var/tmp
[root@hetzner2 tmp]# mkdir deprecateHetzner1
[root@hetzner2 tmp]# chown root:root deprecateHetzner1/
[root@hetzner2 tmp]# chmod 0700 deprecateHetzner1/
[root@hetzner2 tmp]# ls -lah deprecateHetzner1/
total 8.0K
drwx------ 2 root root 4.0K Jul 6 23:14 .
drwxrwxrwt. 52 root root 4.0K Jul 6 23:14 ..
[root@hetzner2 tmp]#
</pre>
# ...
# while the backups were running on hetzner1, I began looking into migrating hetzner2's active daily backups to s3.
# I logged into the aws console for the first time in a couple months, and I saw that our first bill was $5.20 in May, $1.08 in June, and $1.08 in July. Not bad, but that's going to go up after we dump all this hetzner1 stuff in glacier & start using s3 for our dailies. In any case, it'll be far, far, far less than the amount we'll be saving by ending our contract for hetzner1!
# I created our first bucket in s3 named 'oseserverbackups'
## important: it was set to "do not grant public read access to this bucket" !
# looks like I already created an IAM user & creds with access to both glacier & s3. I added this to hetzner2:/root/backups/backup.settings
# I installed the aws cli for the root user on hetzner2, added the creds, and confirmed that I could access the new bucket
<pre>
# create temporary directory
tmpdir=`mktemp -d`
pushd "$tmpdir"
/usr/bin/wget "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip"
/usr/bin/unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
# cleanly exit
popd
/bin/rm -rf "$tmpdir"
exit 0
</pre>
<pre>
[root@hetzner2 tmp.vbm56CUp50]# aws --version
aws-cli/1.15.53 Python/2.7.5 Linux/3.10.0-693.2.2.el7.x86_64 botocore/1.10.52
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
[root@hetzner2 tmp.vbm56CUp50]# aws configure
AWS Access Key ID [None]: <obfuscated>
AWS Secret Access Key [None]: <obfuscated>
Default region name [None]: us-west-2
Default output format [None]:
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
2018-07-07 00:05:22 oseserverbackups
[root@hetzner2 tmp.vbm56CUp50]#
</pre>
# successfully tested an upload to s3
<pre>
[root@hetzner2 backups]# cat /var/tmp/test.txt
some file destined for s3 this is
[root@hetzner2 backups]# aws s3 cp /var/tmp/test.txt s3://oseserverbackups/test.txt
upload: ../../var/tmp/test.txt to s3://oseserverbackups/test.txt
[root@hetzner2 backups]#
</pre>
# confirmed that I could see the file in the aws console wui
# clicked the link for the object, and confirmed that I got an AccessDenied error https://s3-us-west-2.amazonaws.com/oseserverbackups/test.txt
# next step: enable lifecycle policy. Ideally, I want to be able to say that files uploaded on the first of the month (either by metadata of the upload timestamp or by regex match on object name) will automatically "freeze" into glacier after a few days, and all other files will just get deleted automatically after a few days.
## so it looks like we can limit by object name match or by tag. It's probably better if we just have our script add a 'monthly-backup' tag to the object when uploading on the first-of-the-month, then have our lifecycle policy built around that bit https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
## ugh, TIL s3 objects under the STANDARD_IA storage class have a minimum lifetime of 30 days. If you delete an object before 30 days, you're still charged for 30 days https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html
### that means we'll have to store 30 copies of our daily backups at minimum, which are 15G as of now. That's 450G stored to s3 = 0.023 * 450 = $10.35/mo * 12 = $124.2/yr. That sucks.
## per my previous research, we may want to look into using one of these providers instead:
### Backblaze B2 https://www.backblaze.com/b2/cloud-storage.html
### Google Nearline & Coldline https://cloud.google.com/storage/archival/
### Microsoft OneDrive https://onedrive.live.com/about/en-us/
# a quick calculation on the backblaze price calculator (biased, of course) with initial_upload=15G, monthly_upload=450G, monthly_delete=435G, monthly_download=3G gives a cost of $7.11/year. They say that would cost $30.15/yr on s3, $29.88/yr on google, and $26.10 on Microsoft. Well, at least they're wrong in a good way: it would cost more than that in s3. Hopefully they know their own pricing better. $8/year is great for backing-up 15G every day.
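A sketch of the tag-based lifecycle policy idea from above: objects tagged 'monthly-backup' transition to glacier, everything else expires. The tag name and day counts are my placeholders, not settled values, and the `aws s3api` call is shown commented out:
<pre>
#!/bin/sh
# write the lifecycle config; rule IDs, tag, and day counts are placeholders
set -e
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "freeze-monthly-backups-to-glacier",
      "Status": "Enabled",
      "Filter": { "Tag": { "Key": "monthly-backup", "Value": "true" } },
      "Transitions": [ { "Days": 3, "StorageClass": "GLACIER" } ]
    },
    {
      "ID": "expire-daily-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 3 }
    }
  ]
}
EOF
# applying it would be (not run here):
#   aws s3api put-bucket-lifecycle-configuration --bucket oseserverbackups \
#     --lifecycle-configuration file://lifecycle.json
grep -c '"ID"' lifecycle.json
</pre>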
=Thu Jul 05, 2018=
# logged time for last week
# using my ose account, I uploaded the remaining misc photos from my visit to FeF to a new album https://photos.app.goo.gl/YZGTQdWnfFWcJc6p8
## I created a slideshow out of this & added it to the wiki here https://wiki.opensourceecology.org/wiki/Michael_Photo_Folder
# ...
# began revisiting hetzner1. We want to dump all the content onto glacier before we terminate our contract here.
# I just checked the billing section. Wow, it's 74.79 eur per month. What a rip-off! Hopefully we won't have to pay that much longer..
# because we don't have root, this is more tricky. First, we need to get a list of all the users & investigate what data each has. If the total amount of data is small enough, we can just tar it all up & ship it to glacier.
# it's not an exact test, but skimming through /etc/passwd suggests that there may be 11 users on hetzner1: osemain, osecivi, oseblog, oseforum, oseirc, oseholla, osesurv, sandbox, microft, zabbix, openswh
# a better test is probably checking which users' shells are /bin/bash
<pre>
osemain@dedi978:~$ grep '/bin/bash' /etc/passwd
root:x:0:0:root:/root:/bin/bash
postgres:x:111:113:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash
osemain:x:1010:1010:opensourceecology.org:/usr/home/osemain:/bin/bash
osecivi:x:1014:1014:civicrm.opensourceecology.org:/usr/home/osecivi:/bin/bash
oseblog:x:1015:1015:blog.opensourceecology.org:/usr/home/oseblog:/bin/bash
oseforum:x:1016:1016:forum.opensourceecology.org:/usr/home/oseforum:/bin/bash
osesurv:x:1020:1020:survey.opensourceecology.org:/usr/home/osesurv:/bin/bash
microft:x:1022:1022:test.opensourceecology.org:/usr/home/microft:/bin/bash
osemain@dedi978:~$
</pre>
# excluding postgres & root, it looks like 6x users (many of the others are addons, and I think they're under 'osemain') = osemain, osecivi, oseblog, oseforum, osesurv, and microft
<pre>
osemain@dedi978:~$ ls -lah public_html/archive/addon-domains/
total 32K
drwxr-xr-x 8 osemain users 4.0K Jan 18 16:56 .
drwxr-xr-x 13 osemain osemain 4.0K Jun 20 2017 ..
drwxr-xr-x 2 osemain users 4.0K Jul 26 2011 addontest
drwx---r-x 2 osemain users 4.0K Jul 26 2011 holla
drwx---r-x 2 osemain users 4.0K Jul 26 2011 irc
drwxr-xr-x 2 osemain osemain 4.0K Jan 18 16:59 opensourcewarehouse.org
drwxr-xr-x 2 osemain osemain 4.0K Feb 23 2012 sandbox
drwxr-xr-x 13 osemain osemain 4.0K Dec 30 2017 survey
osemain@dedi978:~$
</pre>
# I was able to ssh in as osemain, osecivi, oseblog, and oseforum (using my pubkey, so I must have set this up earlier when investigating what I needed to migrate). I was _not_ able to ssh in as 'osesurv' and 'microft'
# on the main page of the konsoleh wui after logging in, there's 5 domains listed: "(blog|civicrm|forum|test).opensourceecology.org" and 'opensourceecology.org'. The one that stands out here is 'test.opensourceecology.org'. Upon clicking on it & digging around, I found that the username for this domain is 'microft'.
## In this test = microft domain (in the konsoleh wui), I tried to click 'WebFTP' (which is how I would upload my ssh key), but I got an error "Could not connect to server dedi978.your-server.de:21 with user microft". Indeed, it looks like the account is "suspended"
## to confirm further, I clicked the "FTP" link for the forum account, and confirmed that I could ftp in (ugh) as the user & password supplied by the wui (double-ugh). I tried again using the user/pass from the test=microft domain, and I could not login
## ^ that said, it *does* list it as using 4.49G of disk space + 3 DBs
## the 3 DBs are mysql = microft_db2 (24.3M), microft_drupal1 (29.7M), and microft_wiki (19.4G). Holy shit, a 19.4G DB!
### digging into the last db's phpmyadmin, I see a table named "wiki_objectcache" that's 4.2G, "wiki_searchindex" that's 2.7G, and "wiki_text" that's 7.4G. This certainly looks like a Mediawiki DB.
### from the wiki_user table, the last user_id = 1038401 = Traci Clutter, which was created on 20150702040307
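Those phpmyadmin numbers could also be confirmed from a shell by asking information_schema for per-table data+index sizes. A sketch (the mysql invocation is commented out; credentials would come from the konsoleh wui, and the db user shown is an assumption):
<pre>
#!/bin/sh
# write a query that lists the biggest tables in the microft_wiki db
set -e
cat > table_sizes.sql <<'EOF'
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM information_schema.tables
 WHERE table_schema = 'microft_wiki'
 ORDER BY (data_length + index_length) DESC
 LIMIT 10;
EOF
# run it with the db user/pass from the konsoleh wui (not run here):
#   mysql -u microft_wiki -p < table_sizes.sql
</pre>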
# I found that all these accounts are still accessible from a subdomain of our dedi978.your-server.de dns:
## http://blog.opensourceecology.org.dedi978.your-server.de/
### this one gives a 500 internal server error
## http://civicrm.opensourceecology.org.dedi978.your-server.de/
### this one actually loads a drupal page with a login, though the only content is " Welcome to OSE CRM / No front page content has been created yet."
## http://forum.opensourceecology.org.dedi978.your-server.de/
### this one still loads, and appears to be fully functional (ugh)
## http://test.opensourceecology.org.dedi978.your-server.de/
### this gives a 403 forbidden with the comment "You don't have permission to access / on this server." "Server unable to read htaccess file, denying access to be safe"
# In digging through the test.opensourceecology.org domain's settings, I found "Services -> Settings -> Block / Unblock". It (unlike the others) was listed as "Status: Blocked." So I clicked the "Unblock it" button and got "The domain has been successfully unblocked.".
## now WebFTP worked
## this now loads too http://test.opensourceecology.org.dedi978.your-server.de/ ! It's pretty horribly broken, but it appears to be a "True Fans Drupal" "Microfunding Proposal" site. I wouldn't be surprised if it got "blocked" due to being a hacked outdated version of drupal.
## WebFTP didn't let me upload a .ssh dir (it appears to not work with hidden dirs = '.' prefix), but I was able to FTP in (ugh)
## I downloaded the existing .ssh/authorized_keys file, added my key to the end of it, and re-uploaded it
## I was able to successfully ssh-in!
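For reference, that download-append-reupload dance could be scripted with curl's FTP support instead of an interactive client. A sketch (user/pass are placeholders; the network calls are commented out, so only the local append step runs here):
<pre>
#!/bin/sh
# sketch: add my pubkey to the account's authorized_keys over FTP with curl
set -e
cd "$(mktemp -d)"

# 1. download the existing key file (not run here):
#   curl -u 'microft:PASSWORD' 'ftp://dedi978.your-server.de/.ssh/authorized_keys' -o authorized_keys
echo "ssh-rsa AAAA...existingKey... someone@somewhere" > authorized_keys  # stand-in

# 2. append my own pubkey to the end of it
echo "ssh-rsa AAAA...myKey... maltfield@ose" >> authorized_keys

# 3. re-upload it (not run here):
#   curl -u 'microft:PASSWORD' -T authorized_keys 'ftp://dedi978.your-server.de/.ssh/authorized_keys'
wc -l < authorized_keys   # prints 2
</pre>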
# ok, now that I have access to what I believe to be all the accounts, let's determine what they've got in files
# I found a section of the hetzner konsoleh wui that shows sizes per account (Under Statistics -> Account overview)
## opensourceecology.org 99.6G
## blog.opensourceecology.org 8.71G
## test.opensourceecology.org 4.49G
## forum.opensourceecology.org 1.15G
## civicrm.opensourceecology.org 170M
## ^ all sites display "0G" for "Traffic"
# osemain has 5.7G, not including the websites that we migrated--whose data has been moved to 'noBackup'
<pre>
osemain@dedi978:~$ date
Fri Jul 6 01:20:41 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ du -sh * --exclude='noBackup'
983M backups
1.3M bin
4.0K composer.json
36K composer.lock
4.0K cron
4.0K emails.txt
9.8M extensions
16K freemind.sourceforge.net
4.0K id-dsa-iphone.pub
4.0K id_rsa-hetzner
4.0K id_rsa-hetzner.pub
288K installer
0 jboss
470M jboss-4.2.3.GA
4.0K jboss-command-line.txt
234M jdk1.6.0_29
0 jdk-6
808K mbkp
0 opensourceecology.org
4.0K passwd.cdb
4.0K PCRE-patch
0 public_html
4.0K uc?id=0B1psBarfpPkzb0JQV1B6Z01teVk
28K users
16K var-run
2.9M vendor
4.0K videos
4.0K wiki_olddocroot
1.1M wrapper-linux-x86-64-3.5.13
2.6G www_logs
osemain@dedi978:~$ du -sh --exclude='noBackup'
5.7G .
osemain@dedi978:~$
</pre>
# oseblog has 2.7G
<pre>
oseblog@dedi978:~$ date
Fri Jul 6 02:39:11 CEST 2018
oseblog@dedi978:~$ pwd
/usr/home/oseblog
oseblog@dedi978:~$ whoami
oseblog
oseblog@dedi978:~$ du -sh *
8.0K bin
0 blog.opensourceecology.org
12K cron
788K mbkp
349M oftblog.dump
4.0K passwd.cdb
0 public_html
208K tmp
104K users
1.2G www_logs
oseblog@dedi978:~$ du -sh
2.7G .
oseblog@dedi978:~$
</pre>
# osecivi has 44M
<pre>
osecivi@dedi978:~$ date
Fri Jul 6 02:40:19 CEST 2018
osecivi@dedi978:~$ pwd
/usr/home/osecivi
osecivi@dedi978:~$ whoami
osecivi
osecivi@dedi978:~$ du -sh *
4.0K bin
0 civicrm.opensourceecology.org
4.0K civimail-errors.txt
2.0M CiviMail.ignored-2011
20K civimail.out
20K cron
2.5M d7-civicrm.dump
828K d7-drupal.dump
788K mbkp
2.2M oftcivi.dump
8.0M oftdrupal.dump
4.0K passwd.cdb
0 public_html
4.0K pw.txt
28K users
3.4M www_logs
osecivi@dedi978:~$ du -sh
44M .
osecivi@dedi978:~$
</pre>
# oseforum has 1.1G
<pre>
oseforum@dedi978:~$ date
Fri Jul 6 02:41:04 CEST 2018
oseforum@dedi978:~$ pwd
/usr/home/oseforum
oseforum@dedi978:~$ whoami
oseforum
oseforum@dedi978:~$ du -sh *
8.0K bin
16K cron
0 forum.opensourceecology.org
788K mbkp
7.5M oftforum.dump
4.0K passwd.cdb
0 public_html
102M tmp
14M users
11M vanilla-2.0.18.1
756M www_logs
oseforum@dedi978:~$ du -sh
1.1G .
oseforum@dedi978:~$
</pre>
# microft has 1.8G
<pre>
microft@dedi978:~$ date
Fri Jul 6 02:42:00 CEST 2018
microft@dedi978:~$ pwd
/usr/home/microft
microft@dedi978:~$ whoami
microft
microft@dedi978:~$ du -sh *
8.8M db-backup
3.6M drupal.sql
1.6M drush
44M drush-backups
1.7M git_repos
376M mbkp-wiki-db
18M mediawiki-1.20.2.tar.gz
4.0K passwd.cdb
0 public_html
28K users
1.3G www_logs
microft@dedi978:~$ du -sh
1.8G .
microft@dedi978:~$
</pre>
# those numbers above are files only. It doesn't include mailboxes or databases. I don't really care about mailboxes (they're probably unused), but I do want to backup databases.
# osemain has 5 databases:
## openswh 7.51M
## ose_fef 3.65M
## ose_website 32M
## osesurv 697K
## osewiki 2.48G
## there doesn't appear to be any DBs for the 'addon' domains under this domain (addontest, holla, irc, opensourcewarehouse, sandbox, survey)
# oseblog has 1 db
## oseblog 1.23G
# osecivi has 2 dbs
## osecivi 31.3M
## osedrupal 8.05M
# oseforum has 1 db
## oseforum 182M
# microft has 3 dbs
## microft_db2 24.3M
## microft_drupal1 33.4M
## microft_wiki 19.5G
# so the total size of file data to backup is 5.7+2.7+0.04+1.1+1.8 = 11.34G
# and the total size of db data to backup is 0.007+0.003+0.03+0.001+2.48+1.23+0.03+0.08+0.1+0.02+0.02+0.03+19.5 = 23.53G
# therefore, the total size of backups (files + DBs) to push to glacier so we can feel safe permanently shutting down hetzner1 is 34.87G
## note that this size will likely be much smaller after compression.
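Those sums can be double-checked from a shell with awk (same figures as above, rounded to two decimals):
<pre>
awk 'BEGIN {
  files = 5.7 + 2.7 + 0.04 + 1.1 + 1.8
  dbs   = 0.007+0.003+0.03+0.001+2.48+1.23+0.03+0.08+0.1+0.02+0.02+0.03+19.5
  printf "files=%.2fG dbs=%.2fG total=%.2fG\n", files, dbs, files + dbs
}'
# prints: files=11.34G dbs=23.53G total=34.87G
</pre>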
Revision as of 02:54, 11 July 2018
My work log from the year 2018 Quarter 3. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.
See Also
Fri Jul 06, 2018
- yesterday I calculated that we should backup about ~34.87G of data from hetzner1 to glacier before shutting down the node and terminating its contract
- note that this size will likely be much smaller after compression.
- I confirmed that we have 128G of available space to '/' on hetzner2
[root@hetzner2 ~]# date Fri Jul 6 17:59:12 UTC 2018 [root@hetzner2 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/md2 197G 60G 128G 32% / devtmpfs 32G 0 32G 0% /dev tmpfs 32G 0 32G 0% /dev/shm tmpfs 32G 3.1G 29G 10% /run tmpfs 32G 0 32G 0% /sys/fs/cgroup /dev/md1 488M 289M 174M 63% /boot tmpfs 6.3G 0 6.3G 0% /run/user/1005 [root@hetzner2 ~]#
- we also have 165G of available space on '/usr' on hetzner1
osemain@dedi978:~$ date Fri Jul 6 19:59:31 CEST 2018 osemain@dedi978:~$ pwd /usr/home/osemain osemain@dedi978:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/dm-0 9.8G 363M 8.9G 4% / udev 10M 0 10M 0% /dev tmpfs 787M 788K 786M 1% /run /dev/dm-1 322G 142G 165G 47% /usr tmpfs 2.0G 0 2.0G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup /dev/md0 95M 30M 66M 32% /boot /dev/mapper/vg-tmp 4.8G 308M 4.3G 7% /tmp /dev/mapper/vg-var 20G 2.3G 17G 13% /var tmpfs 2.0G 0 2.0G 0% /var/spool/exim/scan /dev/mapper/vg-vartmp 5.8G 1.8G 3.8G 32% /var/tmp osemain@dedi978:~$
- while it may make sense to do this upload to glacier on hetzern1, I've had hetzner1 terminate my screen sessions randomly in the past. I'd rather do it on hetzner2--where I actually have control over the server with root credentials. Therefore, I think I'll make the compressed tarballs on hetzner1 & scp them to hetzner2. On hetzner2 I'll encrypt the tarballs and create their (also encrypted) corresponding metadata files (listing all the files in the tarballs, for easy/cheaper querying later), and upload both.
- I created a wiki article for this CHG, which will be the canonical URL listed in the metadata files for info on what this data is that I've uploaded to glacier
- I discovered that the DBs on hetzner1 are necessarily accessible to the public Internet (ugh).
- so I _could_ do the mysqldump from hetnzer2, but it's better to do it locally (data tx & sec), and then compress it _before_ sending it to hetzner2
- began backing-up files on osemain
# declare vars stamp=`date -u +%Y%m%d-%H%M%S` backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}" backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}" excludeArg1="${HOME}/backups" excludeArg2="${HOME}/noBackup" # prepare backup dir mkdir -p "${backupDir}" # backup home files time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/* # backup web root files time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/* # dump DBs dbName=openswh dbPass=CHANGEME time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2" dbName=ose_fef dbPass=CHANGEME time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2" dbName=ose_website dbPass=CHANGEME time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2" dbName=osesurv dbPass=CHANGEME time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2" dbName=osewiki dbPass=CHANGEME time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
- began backups on oseblog
<pre>
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DB
dbName=oseblog
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
- began backups on osecivi
<pre>
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=osecivi
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osedrupal
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
- began backup of oseforum
<pre>
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DB
dbName=oseforum
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
- began backup of microft
<pre>
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=microft_db2
dbUser=microft_2
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=microft_drupal1
dbUser=microft_d1u
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=microft_wiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
</pre>
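Before uploading, it's worth sanity-checking that each archive is actually readable. A minimal sketch (demonstrated here on a throwaway archive rather than the real backups, so it's safe to run anywhere):

```shell
# sanity-check a tar.bz2 backup before trusting it; this demo builds a
# throwaway archive the same way the backup scripts do (tar -cj)
workdir=$(mktemp -d)
mkdir -p "${workdir}/data"
echo "hello" > "${workdir}/data/file.txt"
tar -cjf "${workdir}/backup.tar.bz2" -C "${workdir}" data

# 1) verify the bzip2 stream is intact
bzip2 -t "${workdir}/backup.tar.bz2"
bz_ok=$?

# 2) list the archive contents without extracting
listing=$(tar -tjf "${workdir}/backup.tar.bz2")
echo "${listing}"

rm -rf "${workdir}"
```

The same two checks (`bzip2 -t`, `tar -tjf`) can be pointed at the real `${backupDir}` files.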
- after compression (but before encryption), here are the resulting sizes of the backups
- oseforum
<pre>
oseforum@dedi978:~$ find noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
57M     noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M     noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
oseforum@dedi978:~$
</pre>
- osecivi 16M
<pre>
osecivi@dedi978:~/noBackup$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
180K    /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2
2.3M    /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2
12M     /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2
1.1M    /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2
osecivi@dedi978:~/noBackup$
</pre>
- oseforum 957M
<pre>
oseforum@dedi978:~$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
854M    /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M     /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M     /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
oseforum@dedi978:~$
</pre>
- created a safe dir on hetzner2 to store the files before encrypting & uploading to glacier
<pre>
[root@hetzner2 tmp]# cd /var/tmp
[root@hetzner2 tmp]# mkdir deprecateHetzner1
[root@hetzner2 tmp]# chown root:root deprecateHetzner1/
[root@hetzner2 tmp]# chmod 0700 deprecateHetzner1/
[root@hetzner2 tmp]# ls -lah deprecateHetzner1/
total 8.0K
drwx------  2 root root 4.0K Jul  6 23:14 .
drwxrwxrwt. 52 root root 4.0K Jul  6 23:14 ..
[root@hetzner2 tmp]#
</pre>
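The plan is to encrypt these archives in that safe dir before they leave the box. A minimal sketch of a symmetric encrypt-then-verify step, under assumptions: openssl (1.1.1+, for `-pbkdf2`) is used here purely for illustration and GPG would work just as well; all filenames below are throwaway stand-ins, not the real backups in /var/tmp/deprecateHetzner1:

```shell
# encrypt a backup with a passphrase file before upload (stand-in files)
workdir=$(mktemp -d)
echo "pretend this is a tar.bz2 backup" > "${workdir}/backup.tar.bz2"
echo "some-long-random-passphrase" > "${workdir}/backup.key"

# encrypt (AES-256-CBC, key derived from the passphrase file via PBKDF2)
openssl enc -aes-256-cbc -pbkdf2 -salt \
 -pass "file:${workdir}/backup.key" \
 -in "${workdir}/backup.tar.bz2" -out "${workdir}/backup.tar.bz2.enc"

# verify we can decrypt it back before deleting anything
openssl enc -d -aes-256-cbc -pbkdf2 \
 -pass "file:${workdir}/backup.key" \
 -in "${workdir}/backup.tar.bz2.enc" -out "${workdir}/roundtrip.tar.bz2"
cmp -s "${workdir}/backup.tar.bz2" "${workdir}/roundtrip.tar.bz2"
roundtrip_ok=$?

# the .enc blob is what would then go to glacier, e.g. (not run here;
# vault name is hypothetical):
#   aws glacier upload-archive --account-id - --vault-name <vaultName> \
#    --body "${workdir}/backup.tar.bz2.enc"
```

The decrypt-and-compare round trip matters: an encrypted blob in glacier is worthless if the passphrase or cipher parameters were wrong at upload time.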
- ...
- while the backups were running on hetzner2, I began looking into migrating hetzner2's active daily backups to s3.
- I logged into the aws console for the first time in a couple months, and I saw that our first bill was $5.20 in May, $1.08 in June, and $1.08 in July. Not bad, but that's going to go up after we dump all this hetzner1 stuff in glacier & start using s3 for our dailies. In any case, it'll be far, far, far less than the amount we'll be saving by ending our contract for hetzner1!
- I created our first bucket in s3 named 'oseserverbackups'
- important: it was set to "do not grant public read access to this bucket" !
- looks like I already created an IAM user & creds with access to both glacier & s3. I added this to hetzner2:/root/backups/backup.settings
- I installed aws for the root user on hetzner2, added the creds, and confirmed that I could access the new bucket
<pre>
# create temporary directory
tmpdir=`mktemp -d`
pushd "$tmpdir"

/usr/bin/wget "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip"
/usr/bin/unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws

# cleanly exit
popd
/bin/rm -rf "$tmpdir"
exit 0
</pre>
<pre>
[root@hetzner2 tmp.vbm56CUp50]# aws --version
aws-cli/1.15.53 Python/2.7.5 Linux/3.10.0-693.2.2.el7.x86_64 botocore/1.10.52
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
[root@hetzner2 tmp.vbm56CUp50]# aws configure
AWS Access Key ID [None]: <obfuscated>
AWS Secret Access Key [None]: <obfuscated>
Default region name [None]: us-west-2
Default output format [None]:
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
2018-07-07 00:05:22 oseserverbackups
[root@hetzner2 tmp.vbm56CUp50]#
</pre>
- successfully tested an upload to s3
<pre>
[root@hetzner2 backups]# cat /var/tmp/test.txt
some file destined for s3 this is
[root@hetzner2 backups]# aws s3 cp /var/tmp/test.txt s3://oseserverbackups/test.txt
upload: ../../var/tmp/test.txt to s3://oseserverbackups/test.txt
[root@hetzner2 backups]#
</pre>
- confirmed that I could see the file in the aws console wui
- clicked the link for the object, and confirmed that I got an AccessDenied error https://s3-us-west-2.amazonaws.com/oseserverbackups/test.txt
- next step: enable a lifecycle policy. Ideally, I want files uploaded on the first of the month (matched either by the upload timestamp metadata or by a regex on the object name) to automatically "freeze" into glacier after a few days, and all other files to be deleted automatically after a few days.
- so it looks like we can limit by object name match or by tag. It's probably better if we just have our script add a 'monthly-backup' tag to the object when uploading on the first-of-the-month, then have our lifecycle policy built around that bit https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
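Assuming the tag-based approach above, the bucket's lifecycle configuration would look roughly like this (a sketch only: the tag key/value, the "daily/" prefix, and the day counts are all placeholders, not settings we've decided on):

```json
{
  "Rules": [
    {
      "ID": "freeze-monthlies-to-glacier",
      "Status": "Enabled",
      "Filter": { "Tag": { "Key": "monthly-backup", "Value": "true" } },
      "Transitions": [ { "Days": 7, "StorageClass": "GLACIER" } ]
    },
    {
      "ID": "expire-dailies",
      "Status": "Enabled",
      "Filter": { "Prefix": "daily/" },
      "Expiration": { "Days": 3 }
    }
  ]
}
```

This JSON is the shape accepted by `aws s3api put-bucket-lifecycle-configuration --lifecycle-configuration file://lifecycle.json`; the upload script would set the tag on first-of-month objects so the first rule matches them.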
- ugh, TIL s3 objects in the STANDARD_IA storage class have a minimum billable lifetime of 30 days. If you delete an object before 30 days, you're still charged for the full 30 days https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html
- that means we'll have to store 30 copies of our daily backups at minimum, which are 15G as of now. That's 450G stored to s3 = 0.023 * 450 = $10.35/mo * 12 = $124.2/yr. That sucks.
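The arithmetic above, as a quick shell check (the $0.023/GB-month figure is the rate used in the estimate, not a quote from the AWS price list):

```shell
# 15G/day retained for the 30-day minimum = 450G resident in s3 at any time
gb_per_day=15
min_days=30
price_per_gb_month=0.023   # $/GB-month rate assumed in the estimate above

monthly=$(awk -v g="$gb_per_day" -v d="$min_days" -v p="$price_per_gb_month" \
 'BEGIN { printf "%.2f", g * d * p }')
yearly=$(awk -v m="$monthly" 'BEGIN { printf "%.1f", m * 12 }')
echo "monthly: \$${monthly}  yearly: \$${yearly}"
```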
- per my previous research, we may want to look into using one of these providers instead:
- Backblaze B2 https://www.backblaze.com/b2/cloud-storage.html
- Google Nearline & Coldline https://cloud.google.com/storage/archival/
- Microsoft OneDrive https://onedrive.live.com/about/en-us/
- a quick calculation on the backblaze price calculator (biased, of course) with initial_upload=15G, monthly_upload=450G, monthly_delete=435G, monthly_download=3G gives a cost of $7.11/year. They say that would cost $30.15/yr on s3, $29.88/yr on google, and $26.10/yr on Microsoft. Well, at least they're wrong in a good way: it would cost more than that on s3. Hopefully they know their own pricing better. ~$8/year is great for backing up 15G every day.
=Thu Jul 05, 2018=
- logged time for last week
- using my ose account, I uploaded the remaining misc photos from my visit to FeF to a new album https://photos.app.goo.gl/YZGTQdWnfFWcJc6p8
- I created a slideshow out of this & added it to the wiki here https://wiki.opensourceecology.org/wiki/Michael_Photo_Folder
- ...
- began revisiting hetzner1. We want to dump all the content onto glacier before we terminate our contract there.
- I just checked the billing section. Wow, it's 74.79 EUR per month. What a rip-off! Hopefully we won't have to pay that much longer.
- because we don't have root, this is trickier. First, we need to get a list of all the users & investigate what data each has. If the total amount of data is small enough, we can just tar it all up & ship it to glacier.
- it's not an exact test, but skimming through /etc/passwd suggests that there may be 11 users on hetzner1: osemain, osecivi, oseblog, oseforum, oseirc, oseholla, osesurv, sandbox, microft, zabbix, openswh
- a better test is probably checking which users' shells are /bin/bash
<pre>
osemain@dedi978:~$ grep '/bin/bash' /etc/passwd
root:x:0:0:root:/root:/bin/bash
postgres:x:111:113:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash
osemain:x:1010:1010:opensourceecology.org:/usr/home/osemain:/bin/bash
osecivi:x:1014:1014:civicrm.opensourceecology.org:/usr/home/osecivi:/bin/bash
oseblog:x:1015:1015:blog.opensourceecology.org:/usr/home/oseblog:/bin/bash
oseforum:x:1016:1016:forum.opensourceecology.org:/usr/home/oseforum:/bin/bash
osesurv:x:1020:1020:survey.opensourceecology.org:/usr/home/osesurv:/bin/bash
microft:x:1022:1022:test.opensourceecology.org:/usr/home/microft:/bin/bash
osemain@dedi978:~$
</pre>
- excluding postgres & root, it looks like 6x users (many of the others are addons, and I think they're under 'osemain') = osemain, osecivi, oseblog, oseforum, osesurv, and microft
<pre>
osemain@dedi978:~$ ls -lah public_html/archive/addon-domains/
total 32K
drwxr-xr-x  8 osemain users   4.0K Jan 18 16:56 .
drwxr-xr-x 13 osemain osemain 4.0K Jun 20  2017 ..
drwxr-xr-x  2 osemain users   4.0K Jul 26  2011 addontest
drwx---r-x  2 osemain users   4.0K Jul 26  2011 holla
drwx---r-x  2 osemain users   4.0K Jul 26  2011 irc
drwxr-xr-x  2 osemain osemain 4.0K Jan 18 16:59 opensourcewarehouse.org
drwxr-xr-x  2 osemain osemain 4.0K Feb 23  2012 sandbox
drwxr-xr-x 13 osemain osemain 4.0K Dec 30  2017 survey
osemain@dedi978:~$
</pre>
- I was able to ssh in as osemain, osecivi, oseblog, and oseforum (using my pubkey, so I must have set this up earlier when investigating what I needed to migrate). I was _not_ able to ssh in as 'osesurv' and 'microft'
- on the main page of the konsoleh wui after logging in, there's 5 domains listed: "(blog|civicrm|forum|test).opensourceecology.org" and 'opensourceecology.org'. The one that stands out here is 'test.opensourceecology.org'. Upon clicking on it & digging around, I found that the username for this domain is 'microft'.
- In this test = microft domain (in the konsoleh wui), I tried to click 'WebFTP' (which is how I would upload my ssh key), but I got an error "Could not connect to server dedi978.your-server.de:21 with user microft". Indeed, it looks like the account is "suspended"
- to confirm further, I clicked the "FTP" link for the forum account, and confirmed that I could ftp in (ugh) as the user & password supplied by the wui (double-ugh). I tried again using the user/pass from the test=microft domain, and I could not login
- ^ that said, it *does* list it as using 4.49G of disk space + 3 DBs
- the 3 DBs are mysql = microft_db2 (24.3M), microft_drupal1 (29.7M), and microft_wiki (19.4G). Holy shit, 19.4G DB!
- digging into the last db's phpmyadmin, I see a table named "wiki_objectcache" that's 4.2G, "wiki_searchindex" that's 2.7G, and "wiki_text" that's 7.4G. This certainly looks like a Mediawiki DB.
- from the wiki_user table, the last user_id = 1038401 = Traci Clutter, which was created on 20150702040307
- I found that all these accounts are still accessible from a subdomain of our dedi978.your-server.de dns:
- http://blog.opensourceecology.org.dedi978.your-server.de/
- this one gives a 500 internal server error
- http://civicrm.opensourceecology.org.dedi978.your-server.de/
- this one actually loads a drupal page with a login, though the only content is " Welcome to OSE CRM / No front page content has been created yet."
- http://forum.opensourceecology.org.dedi978.your-server.de/
- this one still loads, and appears to be fully functional (ugh)
- http://test.opensourceecology.org.dedi978.your-server.de/
- this gives a 403 forbidden with the comment "You don't have permission to access / on this server." "Server unable to read htaccess file, denying access to be safe"
- In digging through the test.opensourceecology.org domain's settings, I found "Services -> Settings -> Block / Unblock". It (unlike the others) was listed as "Status: Blocked." So I clicked the "Unblock it" button and got "The domain has been successfully unblocked.".
- now WebFTP worked
- this now loads too http://test.opensourceecology.org.dedi978.your-server.de/ ! It's pretty horribly broken, but it appears to be a "True Fans Drupal" "Microfunding Proposal" site. I wouldn't be surprised if it got "blocked" due to being a hacked outdated version of drupal.
- WebFTP didn't let me upload a .ssh dir (it appears to not work with hidden dirs = '.' prefix), but I was able to FTP in (ugh)
- I downloaded the existing .ssh/authorized_keys file, added my key to the end of it, and re-uploaded it
- I was able to successfully ssh-in!
- ok, now that I have access to what I believe to be all the accounts, let's determine what they've got in files
- I found a section of the hetzner konsoleh wui that shows sizes per account (Under Statistics -> Account overview)
- opensourceecology.org 99.6G
- blog.opensourceecology.org 8.71G
- test.opensourceecology.org 4.49G
- forum.opensourceecology.org 1.15G
- civicrm.opensourceecology.org 170M
- ^ all sites display "0G" for "Traffic"
- osemain has 5.7G, not including the websites that we migrated, whose data has been moved to 'noBackup'
<pre>
osemain@dedi978:~$ date
Fri Jul 6 01:20:41 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ du -sh * --exclude='noBackup'
983M    backups
1.3M    bin
4.0K    composer.json
36K     composer.lock
4.0K    cron
4.0K    emails.txt
9.8M    extensions
16K     freemind.sourceforge.net
4.0K    id-dsa-iphone.pub
4.0K    id_rsa-hetzner
4.0K    id_rsa-hetzner.pub
288K    installer
0       jboss
470M    jboss-4.2.3.GA
4.0K    jboss-command-line.txt
234M    jdk1.6.0_29
0       jdk-6
808K    mbkp
0       opensourceecology.org
4.0K    passwd.cdb
4.0K    PCRE-patch
0       public_html
4.0K    uc?id=0B1psBarfpPkzb0JQV1B6Z01teVk
28K     users
16K     var-run
2.9M    vendor
4.0K    videos
4.0K    wiki_olddocroot
1.1M    wrapper-linux-x86-64-3.5.13
2.6G    www_logs
osemain@dedi978:~$ du -sh --exclude='noBackup'
5.7G    .
osemain@dedi978:~$
</pre>
- oseblog has 2.7G
<pre>
oseblog@dedi978:~$ date
Fri Jul 6 02:39:11 CEST 2018
oseblog@dedi978:~$ pwd
/usr/home/oseblog
oseblog@dedi978:~$ whoami
oseblog
oseblog@dedi978:~$ du -sh *
8.0K    bin
0       blog.opensourceecology.org
12K     cron
788K    mbkp
349M    oftblog.dump
4.0K    passwd.cdb
0       public_html
208K    tmp
104K    users
1.2G    www_logs
oseblog@dedi978:~$ du -sh
2.7G    .
oseblog@dedi978:~$
</pre>
- osecivi has 44M
<pre>
osecivi@dedi978:~$ date
Fri Jul 6 02:40:19 CEST 2018
osecivi@dedi978:~$ pwd
/usr/home/osecivi
osecivi@dedi978:~$ whoami
osecivi
osecivi@dedi978:~$ du -sh *
4.0K    bin
0       civicrm.opensourceecology.org
4.0K    civimail-errors.txt
2.0M    CiviMail.ignored-2011
20K     civimail.out
20K     cron
2.5M    d7-civicrm.dump
828K    d7-drupal.dump
788K    mbkp
2.2M    oftcivi.dump
8.0M    oftdrupal.dump
4.0K    passwd.cdb
0       public_html
4.0K    pw.txt
28K     users
3.4M    www_logs
osecivi@dedi978:~$ du -sh
44M     .
osecivi@dedi978:~$
</pre>
- oseforum has 1.1G
<pre>
oseforum@dedi978:~$ date
Fri Jul 6 02:41:04 CEST 2018
oseforum@dedi978:~$ pwd
/usr/home/oseforum
oseforum@dedi978:~$ whoami
oseforum
oseforum@dedi978:~$ du -sh *
8.0K    bin
16K     cron
0       forum.opensourceecology.org
788K    mbkp
7.5M    oftforum.dump
4.0K    passwd.cdb
0       public_html
102M    tmp
14M     users
11M     vanilla-2.0.18.1
756M    www_logs
oseforum@dedi978:~$ du -sh
1.1G    .
oseforum@dedi978:~$
</pre>
- microft has 1.8G
<pre>
microft@dedi978:~$ date
Fri Jul 6 02:42:00 CEST 2018
microft@dedi978:~$ pwd
/usr/home/microft
microft@dedi978:~$ whoami
microft
microft@dedi978:~$ du -sh *
8.8M    db-backup
3.6M    drupal.sql
1.6M    drush
44M     drush-backups
1.7M    git_repos
376M    mbkp-wiki-db
18M     mediawiki-1.20.2.tar.gz
4.0K    passwd.cdb
0       public_html
28K     users
1.3G    www_logs
microft@dedi978:~$ du -sh
1.8G    .
microft@dedi978:~$
</pre>
- those numbers above are files only. It doesn't include mailboxes or databases. I don't really care about mailboxes (they're probably unused), but I do want to backup databases.
- osemain has 5 databases:
- openswh 7.51M
- ose_fef 3.65M
- ose_website 32M
- osesurv 697K
- osewiki 2.48G
- there doesn't appear to be any DBs for the 'addon' domains under this domain (addontest, holla, irc, opensourcewarehouse, sandbox, survey)
- oseblog has 1 db
- oseblog 1.23G
- osecivi has 2 dbs
- osecivi 31.3M
- osedrupal 8.05M
- oseforum has 1 db
- oseforum 182M
- microft has 3 dbs
- microft_db2 24.3M
- microft_drupal1 33.4M
- microft_wiki 19.5G
- so the total size of file data to backup are 5.7+2.7+0.04+1.1+1.8 = 11.34G
- and the total size of db data to backup is 0.0075+0.0037+0.032+0.0007+2.48+1.23+0.0313+0.008+0.182+0.0243+0.0334+19.5 ≈ 23.53G
- therefore, the total size of data (files + db dumps) to push to glacier so we can feel safe permanently shutting down hetzner1 is 34.87G
- note that this size will likely be much smaller after compression.
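The totals above can be double-checked with a quick awk sum (per-item sizes in GB, taken from the listings earlier in this entry):

```shell
# file backups: osemain + oseblog + osecivi + oseforum + microft
files=$(awk 'BEGIN { printf "%.2f", 5.7 + 2.7 + 0.04 + 1.1 + 1.8 }')
# DB dumps: openswh, ose_fef, ose_website, osesurv, osewiki, oseblog,
# osecivi, osedrupal, oseforum, microft_db2, microft_drupal1, microft_wiki
dbs=$(awk 'BEGIN { printf "%.2f", 0.0075 + 0.0037 + 0.032 + 0.0007 + 2.48 + 1.23 + 0.0313 + 0.008 + 0.182 + 0.0243 + 0.0334 + 19.5 }')
total=$(awk -v f="$files" -v d="$dbs" 'BEGIN { printf "%.2f", f + d }')
echo "files: ${files}G  dbs: ${dbs}G  total: ${total}G"
```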