Maltfield Log/2018 Q3

My work log from the year 2018 Quarter 3. I intentionally made this verbose to make future admins' work easier when troubleshooting. The more keywords, error messages, etc. that are listed in this log, the more helpful it will be for the future OSE Sysadmin.

See Also

  1. Maltfield_Log
  2. User:Maltfield
  3. Special:Contributions/Maltfield

Sat Jul 14, 2018

  1. Marcin granted me super admin rights on our Google Suite account (so I can whitelist our IPs for Gmail); I haven't tested this access yet
  2. Marcin mentioned that the STL files I produced for 3D printing parts from the Prusa lacked recessed nut catchers. We compared screenshots: his FreeCAD showed the recesses where the nuts go while mine did not. This is a strange discrepancy which Marcin said should be resolved by everyone running OSE Linux. I'll have to set up an HVM for OSE Linux; I can't use it for 99% of my daily tasks as it currently lacks Persistence.
  3. I had several back-and-forth emails with Chris about enabling Persistence in OSE Linux. Progress is being made. Code is in github & documented on our wiki here https://wiki.opensourceecology.org/wiki/OSE_Linux_Persistence
  4. Hetzner got back to me about the addon domains' document roots. They simply told me to check konsoleH. Indeed, the "Document root" _is_ listed when you click on each addon domain, but it's a useless string. I emailed them back, asking them to either tell us the absolute path to each of our 6x addon domains or to send me the entire contents of the /etc/httpd directory so I could figure it out myself (again, I don't have root on this old server)
Hi Bastian,

Can you please tell me the absolute path of each of our addon domains?

It looks like we have 6x addon domains under our 'osemain' account. As you suggested, I clicked on each of the domains in konsoleH and checked the string listed under their "Document Root". Here are the results:

	addontest.opensourceecology.org /addon-domains/addontest/
	holla.opensourceecology.org /addon-domains/holla
	irc.opensourceecology.org /addon-domains/irc/
	opensourcewarehouse.org /archive/addon-domains/opensou…
	sandbox.opensourceecology.org /addon-domains/sandbox
	survey.opensourceecology.org /addon-domains/survey/

Unfortunately, none of these paths are absolute paths. Therefore, they are ambiguous.

If they are merely underneath the master account's docroot, I'd assume these document root directories would be relative to '/usr/home/osemain/public_html/'. However, most of these directories do not exist in that directory.

osemain@dedi978:~/public_html$ date
Sat Jul 14 22:54:52 CEST 2018
osemain@dedi978:~/public_html$ pwd
/usr/home/osemain/public_html
osemain@dedi978:~/public_html$ ls -lah
total 32K
drwx---r-x  5 osemain users   4.0K May 25 17:42 .
drwxr-x--x 14 root    root    4.0K Jul 12 14:29 ..
drwxr-xr-x 13 osemain osemain 4.0K Jun 20  2017 archive
-rwxr-xr-x  1 osemain osemain 1.9K Mar  1 20:31 .htaccess
drwxr-xr-x  2 osemain osemain 4.0K Sep 17  2017 logs
drwxr-xr-x 14 osemain osemain 4.0K Mar 31  2015 mediawiki-1.24.2.extra
-rw-r--r--  1 osemain osemain  526 Jun 19  2015 old.html
-rw-r--r--  1 osemain osemain  883 Jun 19  2015 oldu.html.done
osemain@dedi978:~/public_html$

There is no 'addon-domains' directory here. The only directory that matches the "Document root"s extracted from konsoleH as listed above is for 'opensourcewarehouse.org', which is listed as being inside a directory 'archive'. Unfortunately, I can't even see exactly what that directory is. The ellipsis (...) in "/archive/addon-domains/opensou…" is literally in the string that konsoleH gave me.

Can you please provide for me the _absolute_ paths of the document roots for the 6x vhosts listed as "addon-domains" above? If you could attach the contents of the /etc/httpd directory, that would also be extremely helpful in figuring this information out myself.

Please provide for me the absolute paths of the 6x document roots of the "addon-domains".


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
  1. Emailed Marcin about All Power Labs, a biomass generator company based in Berkeley & added a wiki article about them https://wiki.opensourceecology.org/wiki/AllPowerLabs

Thu Jul 12, 2018

  1. hetzner got back to me about adding the PTR = RDNS entry. They say I can self-service this request via robot "under the tab IP...click on the small plus symbol beside of the IPv6 subnet."
You set the RDNS entry yourself via robot. You can do it directly at the server under the tab IP. Please click on the small plus symbol beside of the IPv6 subnet.

Best regards

  Ralf Sager
  1. I found it: After logging in, click "Servers" (under the "Main Functions" header on the left), then click on our server, then click the "IPs" tab (it was the first = default tab). Indeed, there is a very small plus symbol to the left of our ipv6 subnet = " 2a01:4f8:172:209e:: / 64". Clicking on that plus symbol opens a simple form asking for an "IP Address" + "Reverse DNS entry".
    1. since we have a whole ipv6 subnet, it appears that we can have multiple entries here. I entered "2a01:4f8:172:209e::2" for the ip address (as this was what google reported to us) and "opensourceecology.org" for the "Reverse DNS entry".
    2. interestingly, there were no RDNS entries for the ipv4 addresses above. I set those to 'opensourceecology.org' as well.
    3. it worked immediately!
user@personal:~$ dig +short -x "2a01:4f8:172:209e::2"
opensourceecology.org.
user@personal:~$ 
    1. I emailed Ralf at hetzner back, asking if this self-serviceability of setting the RDNS = PTR address was just as trivial for hetzner cloud nodes as it is for hetzner dedicated baremetal servers
  1. here's the whole PTR = RDNS response using dig on our ipv6 address
user@personal:~$ dig -x "2a01:4f8:172:209e::2"

; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 2a01:4f8:172:209e::2
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7215
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 7

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. IN PTR

;; ANSWER SECTION:
2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. 5937 IN PTR opensourceecology.org.

;; AUTHORITY SECTION:
8.f.4.0.1.0.a.2.ip6.arpa. 5937	IN	NS	ns1.your-server.de.
8.f.4.0.1.0.a.2.ip6.arpa. 5937	IN	NS	ns.second-ns.com.
8.f.4.0.1.0.a.2.ip6.arpa. 5937	IN	NS	ns3.second-ns.de.

;; ADDITIONAL SECTION:
ns.second-ns.com.	5241	IN	A	213.239.204.242
ns.second-ns.com.	111071	IN	AAAA	2a01:4f8:0:a101::b:1
ns1.your-server.de.	84441	IN	A	213.133.106.251
ns1.your-server.de.	24671	IN	AAAA	2a01:4f8:d0a:2006::2
ns3.second-ns.de.	24672	IN	A	193.47.99.4
ns3.second-ns.de.	24671	IN	AAAA	2001:67c:192c::add:b3

;; Query time: 3 msec
;; SERVER: 10.137.2.1#53(10.137.2.1)
;; WHEN: Thu Jul 12 13:51:54 EDT 2018
;; MSG SIZE  rcvd: 358

user@personal:~$ 
  1. ...
  1. hetzner got back to me about the public_html directory being "permission denied" for the 'osesurv' user. They said that the document root is in the main user's public_html dir. I asked for them to tell me the absolute path to this dir, as I cannot check the apache config without root.
Dear Mr Altfield

all website data is always saved in the main account. Addon domains only use files from the main domains public_html folder.

If we can be of any further assistance, please let us know.


Mit freundlichen Grüßen / Kind regards

Jan Barnewold

Wed Jul 11, 2018

  1. hetzner got back to me, stating that I should go to "Services -> Login" in order to access the home directory of the 'osesurv' account (at /usr/home/osesurv)
Dear Mr Altfield

every addon domain has it's own home directory. The login details can be found under Services -> Login.

If you have any further questions, please feel free to contact me.
  1. I navigated to the addon domain in the hetzner WUI konsoleH & then to Services -> Login. I got a username & password. This let me ssh into the server as that user!
osesurv@dedi978:~$ date
Wed Jul 11 18:32:36 CEST 2018
osesurv@dedi978:~$ pwd
/usr/home/osesurv
osesurv@dedi978:~$ whoami
osesurv
osesurv@dedi978:~$ ls -lah
total 80K
drwx--x---  5 osesurv mail    4.0K Sep 21  2011 .
drwxr-x--x 14 root    root    4.0K Mar  9  2013 ..
-rw-r--r--  1 osesurv osesurv  220 Apr 10  2010 .bash_logout
-rw-r--r--  1 osesurv osesurv 3.2K Apr 10  2010 .bashrc
-rw-r--r--  1 osesurv osesurv   40 Sep 21  2011 .forward
-rw-r-----  1 osesurv mail    2.2K Sep 21  2011 passwd.cdb
-rw-r--r--  1 osesurv osesurv  675 Apr 10  2010 .profile
lrwxrwxrwx  1 root    root      23 Sep 21  2011 public_html -> ../../www/users/osesurv
-rw-r--r--  1 osesurv osesurv   40 Sep 21  2011 .qmail
-rw-r--r--  1 osesurv osesurv   25 Sep 21  2011 .qmail-default
drwxr-x---  2 osesurv osesurv 4.0K Sep 21  2011 .tmp
drwxrwxr-x  3 osesurv mail    4.0K Sep 21  2011 users
drwxr-xr-x  2 osesurv osesurv  36K Mar  6  2014 www_logs
osesurv@dedi978:~$ 
  1. none of these addon domains can have databases (I think), but it appears that I need to get all their home & web files
  2. began backing up files on addon domain = osesurv
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
  1. so ^ that failed. The home dir was accessible, but I'm getting a permission denied issue with the www dir linked to by public_html.
osesurv@dedi978:~$ # backup web root files
osesurv@dedi978:~$ time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
tar: Removing leading `/' from member names
tar: /usr/www/users/osesurv/*: Cannot stat: Permission denied
tar: Exiting with failure status due to previous errors

real	0m0.013s
user	0m0.004s
sys	0m0.000s
osesurv@dedi978:~$ 
  1. I emailed hetzner back about this, asking how I can access this user's www dir
Hi Jan,

Thank you. I was able to ssh in, and I was able to access the user's
home directory. But I cannot access the user's www directory.

user@ose:~$ ssh osesurv@dedi978.your-server.de
osesurv@dedi978.your-server.de's password:
Last login: Wed Jul 11 18:30:46 2018 from 108.160.67.63
osesurv@dedi978:~$ date
Wed Jul 11 18:58:36 CEST 2018
osesurv@dedi978:~$ pwd
/usr/home/osesurv
osesurv@dedi978:~$ whoami
osesurv
osesurv@dedi978:~$ ls
noBackup  passwd.cdb  public_html  users  www_logs
osesurv@dedi978:~$ ls -lah public_html
lrwxrwxrwx 1 root root 23 Sep 21  2011 public_html ->
../../www/users/osesurv
osesurv@dedi978:~$ ls -lah public_html/
ls: cannot open directory public_html/: Permission denied
osesurv@dedi978:~$ ls -lah ../../www/users/osesurv
ls: cannot open directory ../../www/users/osesurv: Permission denied
osesurv@dedi978:~$ ls -lah /usr/www/users/osesurv/
ls: cannot open directory /usr/www/users/osesurv/: Permission denied
osesurv@dedi978:~$

Note that I also cannot access this directory from the 'osemain' user
under which the addon domain 'osesurv' exists:

osemain@dedi978:~$ date
Wed Jul 11 19:02:03 CEST 2018
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ ls -lah /usr/www/users/osesurv
ls: cannot access /usr/www/users/osesurv/.: Permission denied
ls: cannot access /usr/www/users/osesurv/..: Permission denied
total 0
d????????? ? ? ? ?            ? .
d????????? ? ? ? ?            ? ..
osemain@dedi978:~$

Can you please tell me how I can access the files in
'/usr/www/users/osesurv/'? Is it possible to do so over ssh?


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
  1. ...
  1. I went to check whether the PTR dns entry (for a reverse lookup of our ipv6 address) that I created yesterday was in place. Unfortunately, there's no change
user@personal:~$ dig +short -x 138.201.84.243
static.243.84.201.138.clients.your-server.de.
user@personal:~$ dig +short -x "2a01:4f8:172:209e::2"
user@personal:~$ 
  1. here are the full results
user@personal:~$ date
Wed Jul 11 13:16:16 EDT 2018
user@personal:~$ dig -x 138.201.84.243

; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 138.201.84.243
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35146
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 7

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;243.84.201.138.in-addr.arpa.	IN	PTR

;; ANSWER SECTION:
243.84.201.138.in-addr.arpa. 86108 IN	PTR	static.243.84.201.138.clients.your-server.de.

;; AUTHORITY SECTION:
84.201.138.in-addr.arpa. 86108	IN	NS	ns3.second-ns.de.
84.201.138.in-addr.arpa. 86108	IN	NS	ns1.your-server.de.
84.201.138.in-addr.arpa. 86108	IN	NS	ns.second-ns.com.

;; ADDITIONAL SECTION:
ns.second-ns.com.	4381	IN	A	213.239.204.242
ns.second-ns.com.	169981	IN	AAAA	2a01:4f8:0:a101::b:1
ns1.your-server.de.	83581	IN	A	213.133.106.251
ns1.your-server.de.	83581	IN	AAAA	2a01:4f8:d0a:2006::2
ns3.second-ns.de.	83581	IN	A	193.47.99.4
ns3.second-ns.de.	83581	IN	AAAA	2001:67c:192c::add:b3

;; Query time: 6 msec
;; SERVER: 10.137.2.1#53(10.137.2.1)
;; WHEN: Wed Jul 11 13:16:22 EDT 2018
;; MSG SIZE  rcvd: 322

user@personal:~$ dig -x "2a01:4f8:172:209e::2"

; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 2a01:4f8:172:209e::2
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57144
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. IN PTR

;; AUTHORITY SECTION:
8.f.4.0.1.0.a.2.ip6.arpa. 6890	IN	SOA	ns1.your-server.de. postmaster.your-server.de. 2018084081 86400 1800 3600000 86400

;; Query time: 3 msec
;; SERVER: 10.137.2.1#53(10.137.2.1)
;; WHEN: Wed Jul 11 13:16:28 EDT 2018
;; MSG SIZE  rcvd: 166

user@personal:~$ 
  1. if we encounter these errors again, I think we'll have to contact hetzner to create these PTR entries for the ipv6 addresses. I don't think I have the ability to do this from our server or from our nameserver at cloudflare
  2. hetzner has an article on this issue, but they merely state to contact their support team https://hetzner.co.za/help-centre/domains/ptr/
  3. I went ahead and contacted hetzner (via our robot portal for hetzner2--distinct from hetzner1's konsoleH) asking them to create the PTR record for our ipv6 addresses. And I asked them if this is something I could do myself or if it necessarily requires a change on their end.
  4. note that this may not be a serviceable request for some types of accounts at hetzner, and it is a valid concern when moving from a dedicated baremetal server to another type of account, such as a cloud server. I documented this concern in the "looking forward" section on the OSE Server article https://wiki.opensourceecology.org/wiki/OSE_Server#Non-dedicated_baremetal_concerns
  1. ...
  1. while I wait for hetzner support's response about how to access all the files for the addon domains, I'll copy the finished backups from the other 5x domains (as opposed to addon domains) to hetzner2 (osemain, osecivi, oseblog, oseforum, and microft), staging them for upload to glacier
    1. osemain's backups (after compression) came to a total of 3.3G
osemain@dedi978:~$ date
Wed Jul 11 19:42:29 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/*
2.9G	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_home.tar.bz2
1.2M	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-openswh.20180706-224656.sql.bz2
192K	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_fef.20180706-224656.sql.bz2
164K	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osesurv.20180706-224656.sql.bz2
4.0K	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_website.20180706-224656.sql.bz2
204M	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osewiki.20180706-224656.sql.bz2
212M	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_webroot.tar.bz2
osemain@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/
3.3G	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/
osemain@dedi978:~$ 
    1. osecivi's backups (after compression) came to a total of 15M
osecivi@dedi978:~$ date
Wed Jul 11 19:49:44 CEST 2018
osecivi@dedi978:~$ pwd
/usr/home/osecivi
osecivi@dedi978:~$ whoami
osecivi
osecivi@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/*
2.3M    noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2
1.1M    noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2
180K    noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2
12M     noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2
osecivi@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/
15M     noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/
osecivi@dedi978:~$ 
    1. oseblog's backups (after compression) came to a total of 4.4G
oseblog@dedi978:~$ date
Wed Jul 11 19:58:51 CEST 2018
oseblog@dedi978:~$ pwd
/usr/home/oseblog
oseblog@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/*
1.3G    noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_home.tar.bz2
135M    noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_mysqldump-oseblog.20180706-234052.sql.bz2
3.1G    noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_webroot.tar.bz2
oseblog@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/
4.4G    noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/
oseblog@dedi978:~$ 
    1. oseforum's backups (after compression) came to a total of 956M
oseforum@dedi978:~$ date
Wed Jul 11 20:02:04 CEST 2018
oseforum@dedi978:~$ pwd
/usr/home/oseforum
oseforum@dedi978:~$ whoami
oseforum
oseforum@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/*
854M	noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M	noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M	noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
oseforum@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/
956M	noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/
oseforum@dedi978:~$ 
    1. microft's backups (after compression) came to a total of 6.2G
microft@dedi978:~$ date
Wed Jul 11 20:06:19 CEST 2018
microft@dedi978:~$ pwd
/usr/home/microft
microft@dedi978:~$ whoami
microft
microft@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/*
1.4G	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_home.tar.bz2
528K	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_db2.20180706-234228.sql.bz2
1.3M	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_drupal1.20180706-234228.sql.bz2
3.3G	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_wiki.20180706-234228.sql.bz2
1.7G	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_webroot.tar.bz2
microft@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/
6.2G	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/
microft@dedi978:~$ 
  1. therefore, the total for the 5x domains (excluding addon domains) dropped from ~34.87G before compression to ~14.87G after compression
    1. that's a totally reasonable size to back up. In fact, I think I'll leave some of these backups live on hetzner2. I should definitely do so for the forum, in case we ever want to make that site rw again.
    2. I went ahead and created the "hot" backup of the oseforum in the corresponding apache dir
[root@hetzner2 forum.opensourceecology.org]# date
Wed Jul 11 19:16:55 UTC 2018
[root@hetzner2 forum.opensourceecology.org]# pwd
/var/www/html/forum.opensourceecology.org
[root@hetzner2 forum.opensourceecology.org]# du -sh *
955M    final_backup_before_hetzner1_deprecation_oseforum_20180706-230007
2.7G    htdocs
4.0K    readme.txt
173M    vanilla_docroot_backup.20180113
[root@hetzner2 forum.opensourceecology.org]# du -sh final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/*
853M    final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M     final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M     final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
[root@hetzner2 forum.opensourceecology.org]# 
    1. I created a readme.txt explaining what happened for the future sysadmin
[root@hetzner2 forum.opensourceecology.org]# cat readme.txt 
In 2018, the forums were no longer moderated or maintained, and the decision was made to deprecate support for the site. The content is still accessible as static content; adding new content is not possible.

For more information, please see:

 * https://wiki.opensourceecology.org/wiki/CHG-2018-02-04

On 2018-07-11, during the backup stage of the change to deprecate hetzner1, a backup of the vanilla forums home directory, webroot directory, and database dump was created for upload to long-term backup storage (Glacier). Because this backup size was manageably small (1G, which is actually smaller than the 2.7G of static content currently live in the forum's docroot), I put a "hot" copy of this dump in the forum's apache dir (but outside the htdocs root, of course) located at hetzner2:/var/www/html/forum.opensourceecology.org/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/

 * https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation

-- Michael Altfield <michael@opensourceecology.org> 2018-07-11
[root@hetzner2 forum.opensourceecology.org]# 
    1. and, finally, I updated the relevant wiki articles for the forums
      1. https://wiki.opensourceecology.org/wiki/CHG-2018-02-04
      2. https://wiki.opensourceecology.org/wiki/Vanilla_Forums
      3. https://wiki.opensourceecology.org/wiki/OSE_Forum
  1. I scp'd all these tarballs to hetzner2
[root@hetzner2 deprecateHetzner1]# date
Wed Jul 11 18:17:14 UTC 2018
[root@hetzner2 deprecateHetzner1]# pwd
/var/tmp/deprecateHetzner1
[root@hetzner2 deprecateHetzner1]# du -sh /var/tmp/deprecateHetzner1/*
6.2G    /var/tmp/deprecateHetzner1/microft
4.4G    /var/tmp/deprecateHetzner1/oseblog
15M     /var/tmp/deprecateHetzner1/osecivi
955M    /var/tmp/deprecateHetzner1/oseforum
3.3G    /var/tmp/deprecateHetzner1/osemain
[root@hetzner2 deprecateHetzner1]# du -sh /var/tmp/deprecateHetzner1/
15G     /var/tmp/deprecateHetzner1/
[root@hetzner2 deprecateHetzner1]# 
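For reference, the copies were done along these lines (a sketch; the exact invocations weren't logged, but it was scp pulling each user's staging dir from hetzner1 into the dirs listed above):

# sketch: pull one user's staged backups from hetzner1 (dedi978) into hetzner2's staging area
scp -r osemain@dedi978.your-server.de:noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656 /var/tmp/deprecateHetzner1/osemain/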
  1. I still need to generate the metadata files that explain what these tarballs hold with a message + file list (`tar -t`); see the sketch after this list. This will also hopefully serve as a test to validate that the files were not corrupted in-transit during the scp or tar creation. I generally prefer rsync so I can double-tap, but I had some issues with ssh key auth with rsync (so I just used scp, which auth'd fine).
  2. of course, I also still need to generate the backup tarballs for the addon domains after hetzner gets back to me on how to access their web roots.
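A sketch of that metadata + integrity check (staging paths from the du listing above): listing each tarball with `tar -t` forces a full decompress & read, so in-transit corruption would surface as a tar/bzip2 error.

# sketch: write a file list for each staged tarball; a non-zero exit from tar
# means the archive failed to read back cleanly (i.e. possibly corrupted in-transit)
for tarball in /var/tmp/deprecateHetzner1/*/*.tar.bz2; do
	tar -tjvf "${tarball}" > "${tarball}.fileList.txt" || echo "CORRUPT: ${tarball}"
done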
  1. ...
  1. I began looking back at the hancock:/home/marcin_ose/backups/uploadToGlacier.sh file that I used back in March to generate metadata files for each of the encrypted tarballs dumped onto glacier https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q1#Sat_Mar_31.2C_2018
hancock% cat uploadToGlacier.sh 
#!/bin/bash -x

############
# SETTINGS #
############

#backupDirs="hetzner2/20171101-072001"
#backupDirs="hetzner1/20170901-052001"
#backupDirs="hetzner1/20171001-052001"
#backupDirs="hetzner1/20171101-062001 hetzner1/20171201-062001"
#backupDirs="hetzner1/20171201-062001"
backupDirs="hetzner2/20170702-052001 hetzner2/20170801-072001 hetzner2/20170901-072001 hetzner2/20171001-072001 hetzner2/20171101-072001 hetzner2/20171202-072001 hetzner2/20180102-072001 hetzner2/20180202-072001 hetzner2/20180302-072001 hetzner2/20180401-072001 hetzner1/20170701-052001 hetzner1/20170801-052001 hetzner1/20180101-062001 hetzner1/20180201-062001 hetzner1/20180301-062002 hetzner1/20180401-052001"
syncDir="/home/marcin_ose/backups/uploadToGlacier"
encryptionKeyFilePath="/home/marcin_ose/backups/ose-backups-cron.key"

export AWS_ACCESS_KEY_ID='<obfuscated>'
export AWS_SECRET_ACCESS_KEY='<obfuscated>'

##############
# DO UPLOADS #
##############

for dir in $(echo $backupDirs); do

	archiveName=`echo ${dir} | tr '/' '_'`;
	timestamp=`date -u --rfc-3339=seconds`
	fileListFilePath="${syncDir}/${archiveName}.fileList.txt"
	archiveFilePath="${syncDir}/${archiveName}.tar"

	#########################
	# archive metadata file #
	#########################
	
	# first, generate a file list to help the future sysadmin get metadata about the archive without having to download the huge archive itself
	echo "================================================================================" > "${fileListFilePath}"
	echo "This file is metadata for the archive '${archiveName}'. In it, we list all the files included in the compressed/encrypted archive (produced using 'ls -lahR ${dir}'), including the files within the tarballs within the archive (produced using 'find "${dir}" -type f -exec tar -tvf '{}' \; ')" >> "${fileListFilePath}"
	echo "" >> "${fileListFilePath}"
	echo " - Michael Altfield <maltfield@opensourceecology.org>" >> "${fileListFilePath}"
	echo "" >> "${fileListFilePath}"
	echo " Note: this file was generated at ${timestamp}" >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"
	echo "#############################" >> "${fileListFilePath}"
	echo "# 'ls -lahR' output follows #" >> "${fileListFilePath}"
	echo "#############################" >> "${fileListFilePath}"
	ls -lahR ${dir} >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"
	echo "############################" >> "${fileListFilePath}"
	echo "# tarball contents follows #" >> "${fileListFilePath}"
	echo "############################" >> "${fileListFilePath}"
	find "${dir}" -type f -exec tar -tvf '{}' \; >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"

	# compress the metadata file
	bzip2 "${fileListFilePath}"

	# encrypt the metadata file
	#gpg --symmetric --cipher-algo aes --armor --passphrase-file "${encryptionKeyFilePath}" "${fileListFilePath}.bz2"
	gpg --symmetric --cipher-algo aes --passphrase-file "${encryptionKeyFilePath}" "${fileListFilePath}.bz2"

	# delete the unencrypted archive
	rm "${fileListFilePath}"

	# upload it
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}_metadata.txt.bz2.asc: this is a metadata file showing the file and dir list contents of the archive of the same name" --body "${fileListFilePath}.bz2.asc"
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}_metadata.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same name" --body "${fileListFilePath}.bz2.gpg"
	/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 "${fileListFilePath}.bz2.gpg"

	if [ $? -eq 0 ]; then
		rm -f "${fileListFilePath}.bz2.gpg"
	fi

	################
	# archive file #
	################

	# generate archive file as a single, compressed file
	tar -cvf "${archiveFilePath}" "${dir}/"

	# encrypt the archive file
	#gpg --symmetric --cipher-algo aes --armor --passphrase-file "${encryptionKeyFilePath}" "${archiveFilePath}"
	gpg --symmetric --cipher-algo aes --passphrase-file "${encryptionKeyFilePath}" "${archiveFilePath}"

	# delete the unencrypted archive
	rm "${archiveFilePath}"

	# upload it
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}.tar.gz.asc: this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates" --body "${archiveFilePath}.asc"
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}.tar.gz.gpg: this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates" --body "${archiveFilePath}.gpg"
	/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 "${archiveFilePath}.gpg"

	if [ $? -eq 0 ]; then
		rm -f "${archiveFilePath}.gpg"
	fi

done
hancock% 
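Note for the future sysadmin: the metadata file lists this script produces are bzip2'd text encrypted with symmetric gpg, so reading one back is along these lines (a sketch; the key file path is taken from the script's settings above, the filename is illustrative):

# sketch: decrypt & decompress one of the generated file lists
gpg --batch --decrypt --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key hetzner1_20180401-052001.fileList.txt.bz2.gpg | bunzip2 | less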

Tue Jul 10, 2018

  1. hetzner got back to me as expected, stating that it's an addon domain. It's hard to convey via email (plus through a language barrier) that I'm aware of the fact that there's an addon domain of the same name = survey. But, unlike the other addon domains, there is a distinct directory & user on the physical server that I cannot access. I'm assuming it was previously used as a non-addon domain, then an addon domain was created. Or something. In any case, there is an actual directory '/usr/home/osesurv' that I need to access. I replied to them asking for an `ls -lah /usr/home/osesurv` to be sent to me.
  2. Marcin forwarded an error report from google's webmaster tools. It showed 1 issue; I'm not concerned. It has a lot of false-positives (special pages, robots, etc)
  3. Marcin sent me emails about two users who have not received emails (containing their temp password) after registering for an account on the wiki
    1. Miles Ransaw <milesransaw@gmail.com>
    2. Harman Bains <bains.hmn@gmail.com>
    3. this is extremely frustrating, as it appears that mediawiki does send emails most of the time, but occasionally users complain that emails do not come in (even after checking the spam folder). I can't find a way to reproduce this issue, so what I really need to do is find some logs containing the users' names/emails above.
    4. did some digging around the source code & confirmed that mediawiki falls back on using the mail() function of php in includes/mail/UserMailer.php:sendInternal(). It looks like it also has support for the PEAR mailer. There's a debug line that indicates when it has to fall back to mail().
if ( !stream_resolve_include_path( 'Mail/mime.php' ) ) {
	wfDebug( "PEAR Mail_Mime package is not installed. Falling back to text email.\n" );
  1. checking with `pear list`, it looks like we don't have PEAR Mail_Mime installed
[root@hetzner2 ~]# pear list
...
INSTALLED PACKAGES, CHANNEL PEAR.PHP.NET:
=========================================
PACKAGE          VERSION STATE
Archive_Tar      1.4.2   stable
Console_Getopt   1.4.1   stable
PEAR             1.10.4  stable
Structures_Graph 1.1.1   stable
XML_Util         1.4.2   stable
[root@hetzner2 ~]# 
  1. I even checked to see if it was a file within the "PEAR" package. It isn't there.
[root@hetzner2 ~]# pear list-files PEAR | grep -i mail
PHP Warning:  ini_set() has been disabled for security reasons in /usr/share/pear/pearcmd.php on line 30
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
[root@hetzner2 ~]# pear list-files PEAR | grep -i mime
PHP Warning:  ini_set() has been disabled for security reasons in /usr/share/pear/pearcmd.php on line 30
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
[root@hetzner2 ~]# 
  1. compare this to our old server, and we see a discrepancy! The old server has this module. Perhaps this is the issue?
osemain@dedi978:~$ pear list
Installed packages, channel pear.php.net:
=========================================
Package          Version State
Archive_Tar      1.4.3   stable
Console_Getopt   1.4.1   stable
DB               1.7.14  stable
Date             1.4.7   stable
File             1.3.0   stable
HTTP             1.4.1   stable
HTTP_Request     1.4.4   stable
Log              1.12.8  stable
MDB2             2.5.0b5 beta
Mail             1.2.0   stable
Mail_Mime        1.8.9   stable
Mail_mimeDecode  1.5.5   stable
Net_DIME         1.0.2   stable
Net_IPv4         1.3.4   stable
Net_SMTP         1.6.2   stable
Net_Socket       1.0.14  stable
Net_URL          1.0.15  stable
PEAR             1.10.5  stable
SOAP             0.13.0  beta
Structures_Graph 1.1.1   stable
XML_Parser       1.3.4   stable
XML_Util         1.4.2   stable
osemain@dedi978:~$ 
  1. a quick yum search shows the package (we don't fucking want to use the pear package manager)
[root@hetzner2 ~]# yum search pear | grep -i mail
php-channel-swift.noarch : Adds swift mailer project channel to PEAR
php-pear-Mail.noarch : Class that provides multiple interfaces for sending
					 : emails
php-pear-Mail-Mime.noarch : Classes to create MIME messages
php-pear-Mail-mimeDecode.noarch : Class to decode mime messages
[root@hetzner2 ~]# 
  1. so I _could_ install this, but I really want to develop some test that proves it doesn't work, then install. Then re-test & confirm it's fixed.
  2. it looks like we can trigger MediaWiki to send a user an email via the Special:EmailUser page https://wiki.opensourceecology.org/index.php?title=Special:EmailUser/
    1. well, I did that, and the email went through. It came from 'contact@opensourceecology.org'
    2. unfortunately, it looks like wiki-error.log hasn't populated since Jun 18th. I was monitoring the other logs when I triggered the EmailUser above; it was shown in access_log, and nothing came into error_log
  3. I changed the permissions of this wiki-error.log file to "not-apache:apache-admins" and 0660, and it began writing
    1. hopefully this won't fill the disk! iirc, mediawiki has some mechanism to prevent infinite growth..
  4. I re-triggered the email, but (surprisingly), I saw very little Mail-related info in the wiki-error.log file, except
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org>
Sending mail via internal mail() function
MediaWiki::preOutputCommit: primary transaction round committed
MediaWiki::preOutputCommit: pre-send deferred updates completed
MediaWiki::preOutputCommit: LBFactory shutdown completed
User: loading options for user 3709 from override cache.
OutputPage::sendCacheControl: private caching;  **
[error] [W0U@lnhB25P5rXweNx8gYQAAAAs] /wiki/Special:EmailUser   ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons
  1. that "ini_set() has been disabled for security reasons" occurs all the time; it shouldn't be an issue. Indeed, the email came through. What I was expecting to see was the "PEAR Mail_Mime package is not installed. Falling back to text email" message. It didn't appear.
  2. I opened the wiki-error.log file in vim, and then I did find this:
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org>                                                                                                                             
Sending mail via internal mail() function
  1. whatever; it's a different logic location, and that's a good enough test. Let me try to install the pear module, then retry the email send. If everything still works, I'll ask the users to try again; maybe that will just fix it. In any case, it appears that having pear may make it easier to debug.
[root@hetzner2 wiki.opensourceecology.org]# yum install php-pear-Mail-Mime
...
Installed:
  php-pear-Mail-Mime.noarch 0:1.10.2-1.el7                                                                                                                                                              

Complete!
[root@hetzner2 wiki.opensourceecology.org]#
  1. I re-triggered the email to send. It came in, and the log still says it's using the mail() function
[root@hetzner2 wiki.opensourceecology.org]# tail -f wiki-error.log  | grep -C3 -i mail

[DBConnection] Closing connection to database 'localhost'.
[DBConnection] Closing connection to database 'localhost'.
IP: 127.0.0.1
Start request POST /wiki/Special:EmailUser
HTTP HEADERS:
X-REAL-IP: 104.51.202.137
X-FORWARDED-PROTO: https
--
ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
ACCEPT-LANGUAGE: en-US,en;q=0.8
ACCEPT-ENCODING: gzip, deflate, br
REFERER: https://wiki.opensourceecology.org/wiki/Special:EmailUser
COOKIE: donot=cacheme; osewiki_db_wiki__session=tglqeps7foc00ah128mjrap1la56qdlj; osewiki_db_wiki_UserID=3709; osewiki_db_wiki_UserName=Maltfield
DNT: 1
UPGRADE-INSECURE-REQUESTS: 1
--
	"ChronologyProtection": false
}
[DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection.
[error] [W0VBqahT@xRKhqoY@veLxgAAAAU] /wiki/Special:EmailUser   ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons
#0 [internal function]: MWExceptionHandler::handleError(integer, string, string, integer, array)
#1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(693): ini_set(string, string)
#2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(129): Wikimedia\Rdbms\Database->installErrorHandler()
--
#10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(777): wfGetDB(integer)
#11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(396): User::idFromName(string, integer)
#12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(2230): User->load()
#13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(223): User->getId()
#14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(205): SpecialEmailUser::validateTarget(User, User)
#15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(47): SpecialEmailUser::getTarget(string, User)
#16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPage.php(488): SpecialEmailUser->getDescription()
#17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(116): SpecialPage->setHeaders()
#18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPage.php(522): SpecialEmailUser->execute(NULL)
#19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPageFactory.php(578): SpecialPage->run(NULL)
#20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(287): SpecialPageFactory::executePath(Title, RequestContext)
#21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(851): MediaWiki->performRequest()
--
User::getBlockedStatus: checking...
User: loading options for user 3709 from override cache.
User: loading options for user 3709 from override cache.
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org>
Sending mail via internal mail() function
MediaWiki::preOutputCommit: primary transaction round committed
MediaWiki::preOutputCommit: pre-send deferred updates completed
MediaWiki::preOutputCommit: LBFactory shutdown completed
User: loading options for user 3709 from override cache.
OutputPage::sendCacheControl: private caching;  **
[error] [W0VBqahT@xRKhqoY@veLxgAAAAU] /wiki/Special:EmailUser   ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons
#0 [internal function]: MWExceptionHandler::handleError(integer, string, string, integer, array)
#1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(693): ini_set(string, string)
#2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(129): Wikimedia\Rdbms\Database->installErrorHandler()
  1. I'm tired of these errors; I commented out line 693 in includes/libs/rdbms/database/Database.php:installErrorHandler()
/**
 * Set a custom error handler for logging errors during database connection
 */
protected function installErrorHandler() {
	$this->mPHPError = false;
	#$this->htmlErrors = ini_set( 'html_errors', '0' );
	set_error_handler( [ $this, 'connectionErrorLogger' ] );
}
  1. that cleans up the output at least
[root@hetzner2 wiki.opensourceecology.org]# tail -f wiki-error.log  | grep -C3 -i mail

[DBConnection] Closing connection to database 'localhost'.
[DBConnection] Closing connection to database 'localhost'.
IP: 127.0.0.1
Start request POST /wiki/Special:EmailUser
HTTP HEADERS:
X-REAL-IP: 104.51.202.137
X-FORWARDED-PROTO: https
--
ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
ACCEPT-LANGUAGE: en-US,en;q=0.8
ACCEPT-ENCODING: gzip, deflate, br
REFERER: https://wiki.opensourceecology.org/wiki/Special:EmailUser
COOKIE: donot=cacheme; osewiki_db_wiki__session=tglqeps7foc00ah128mjrap1la56qdlj; osewiki_db_wiki_UserID=3709; osewiki_db_wiki_UserName=Maltfield
DNT: 1
UPGRADE-INSECURE-REQUESTS: 1
--
User::getBlockedStatus: checking...
User: loading options for user 3709 from override cache.
User: loading options for user 3709 from override cache.
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org>
Sending mail via internal mail() function
MediaWiki::preOutputCommit: primary transaction round committed
MediaWiki::preOutputCommit: pre-send deferred updates completed
MediaWiki::preOutputCommit: LBFactory shutdown completed
  1. curiously, as a test per this article, I wrote a simple test.php script to mail() myself something; it failed https://www.mediawiki.org/wiki/Manual:$wgEnableEmail
[root@hetzner2 mail]# cat /var/www/html/wiki.opensourceecology.org/htdocs/test.php 
<?php
# This is just a test to debug email issues; please delete this file
# --Michael Altfield <michael@opensourceecology.org> 2018-07-10

# we set a cookie to prevent varnish from caching this page
header( "Set-Cookie: donot=cacheme" );

mail( "michael@opensourceecology.org", "my subject", "my message body" );
?>
[root@hetzner2 mail]# 
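To exercise it through the full varnish + apache stack (a sketch; the cookie matches the varnish-bypass trick inside the script):

# sketch: request the test script over https; the Cookie header keeps varnish from serving a cached copy
curl -s -H 'Cookie: donot=cacheme' 'https://wiki.opensourceecology.org/test.php'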
  1. while tailing the maillog, I see this when I trigger my test script
[root@hetzner2 mail]# tail -f /var/log/maillog
Jul 10 23:41:16 hetzner2 postfix/pickup[11033]: DCFA7681EA4: uid=48 from=<apache>
Jul 10 23:41:16 hetzner2 postfix/cleanup[23835]: DCFA7681EA4: message-id=<20180710234116.DCFA7681EA4@hetzner2.opensourceecology.org>
Jul 10 23:41:16 hetzner2 postfix/qmgr[1631]: DCFA7681EA4: from=<apache@hetzner2.opensourceecology.org>, size=412, nrcpt=1 (queue active)
Jul 10 23:41:17 hetzner2 postfix/smtp[23837]: DCFA7681EA4: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[108.177.119.27]:25, delay=0.27, delays=0.02/0/0.05/0.2, dsn=2.0.0, status=sent (250 2.0.0 OK 1531266077 s19-v6si3778539edc.383 - gsmtp)
Jul 10 23:41:17 hetzner2 postfix/qmgr[1631]: DCFA7681EA4: removed
  1. but I see this when I load the mediawiki EmailUser page
Jul 10 23:42:06 hetzner2 postfix/pickup[11033]: 43A7F681EA4: uid=48 from=<contact@opensourceecology.org>
Jul 10 23:42:06 hetzner2 postfix/cleanup[23835]: 43A7F681EA4: message-id=<osewiki_db-wiki_.5b45444e401807.54864375@wiki.opensourceecology.org>
Jul 10 23:42:06 hetzner2 postfix/qmgr[1631]: 43A7F681EA4: from=<contact@opensourceecology.org>, size=983, nrcpt=1 (queue active)
Jul 10 23:42:06 hetzner2 postfix/smtp[23837]: 43A7F681EA4: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[64.233.167.26]:25, delay=0.29, delays=0.01/0/0.07/0.22, dsn=2.0.0, status=sent (250 2.0.0 OK 1531266126 j140-v6si391690wmd.76 - gsmtp)
Jul 10 23:42:06 hetzner2 postfix/qmgr[1631]: 43A7F681EA4: removed
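Both messages were handed to the local postfix & accepted by google. As a sanity check that mediawiki really is on the default mail() path, here's a quick look for $wgSMTP (a sketch; the LocalSettings.php path is inferred from the stack traces above):

# if this prints nothing, $wgSMTP is unset, so mediawiki uses php's mail() -> local postfix
grep -n 'wgSMTP' /var/www/html/wiki.opensourceecology.org/htdocs/LocalSettings.php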
  1. so further research & digging into the code suggests that the PEAR module is only used if we want to use an external SMTP server. We don't; we want to use our local SMTP server. The default is to use mail(). Since the $wgSMTP var wasn't set on the old server, the old server should have also been using mail() https://www.mediawiki.org/wiki/Manual:$wgSMTP
  2. finally, I decided to grep the maillog for one of the users = milesransaw@gmail.com. I got an error that appears to have come from google regarding "IPv6 sending guidelines"
[root@hetzner2 htdocs]# grep -irC5 'milesransaw@gmail.com' /var/log
/var/log/maillog-20180710-Jul  9 22:21:08 hetzner2 postfix/scache[24613]: statistics: address lookup hits=0 miss=4 success=0%
/var/log/maillog-20180710-Jul  9 22:21:08 hetzner2 postfix/scache[24613]: statistics: max simultaneous domains=1 addresses=2 connection=2
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/pickup[24510]: 5039E681DE9: uid=48 from=<contact@opensourceecology.org>
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/cleanup[25828]: 5039E681DE9: message-id=<osewiki_db-wiki_.5b43e0dd4ba6c0.09234917@wiki.opensourceecology.org>
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/qmgr[1631]: 5039E681DE9: from=<contact@opensourceecology.org>, size=1236, nrcpt=1 (queue active)
/var/log/maillog-20180710:Jul  9 22:25:33 hetzner2 postfix/smtp[25830]: 5039E681DE9: to=<milesransaw@gmail.com>, relay=gmail-smtp-in.l.google.com[2a00:1450:400c:c0c::1b]:25, delay=0.36, delays=0.02/0/0.05/0.3, dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[2a00:1450:400c:c0c::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . c19-v6si14396804wrc.112 - gsmtp (in reply to end of DATA command))
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/cleanup[25828]: A808D681EA4: message-id=<20180709222533.A808D681EA4@hetzner2.opensourceecology.org>
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/bounce[25832]: 5039E681DE9: sender non-delivery notification: A808D681EA4
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/qmgr[1631]: A808D681EA4: from=<>, size=3974, nrcpt=1 (queue active)
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/qmgr[1631]: 5039E681DE9: removed
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/smtp[25830]: A808D681EA4: to=<contact@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c0c::1b]:25, delay=0.15, delays=0/0/0.06/0.08, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c0c::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . s9-v6si13928170wrm.364 - gsmtp (in reply to end of DATA 
  1. I did another search for the other user = bains.hmn@gmail.com. Interestingly, I got no results at all this time
[root@hetzner2 htdocs]# grep -irC5 'bains.hmn' /var/log
[root@hetzner2 htdocs]# 
    1. when I try to email this user using Special:EmailUser, I get an error = "This user has not specified a valid email address."
    2. digging into the DB, I see this user set their email to 'bains.hmn@gmail.com', which seems fine to me
MariaDB [osewiki_db]> select user_email from wiki_user where user_name = 'Hbains' limit 10;
+---------------------+
| user_email          |
+---------------------+
| bains.hmn@gmail.com |
+---------------------+
1 row in set (0.00 sec)

MariaDB [osewiki_db]> 
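One thing worth checking here (a sketch; in the stock mediawiki schema, Special:EmailUser requires the address to be *confirmed*, not just set): a NULL user_email_authenticated would produce exactly this "not specified a valid email address" error.

# sketch: a NULL user_email_authenticated means the user never clicked their email-confirmation link
mysql osewiki_db -e "select user_name, user_email, user_email_authenticated from wiki_user where user_name = 'Hbains';"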
    1. anyway, let me continue with the one that's not a dead-end. Unfortunately, the IPv6AuthError link just sends me to a generic google "Bulk Sending Guidelines" doc https://support.google.com/mail/?p=IPv6AuthError
    2. I already configured our spf records, but it wants some more shit. First I have to get the "Gmail settings administrator privileges" from Marcin.
  1. actually, the error log specifically mentions "550-5.7.1"; more on this specific error number can be found in this "SMTP Error Reference" https://support.google.com/a/answer/3726730?hl=en
    1. 550, "5.7.1", Email quota exceeded.
    2. 550, "5.7.1", Invalid credentials for relay.
    3. 550, "5.7.1", Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been blocked. Review our Bulk Senders Guidelines.
    4. 550, "5.7.1", Our system has detected that this message is likely unsolicited mail. To reduce the amount of spam sent to Gmail, this message has been blocked. For more information, review this article.
    5. 550, "5.7.1", The IP you're using to send mail is not authorized to send email directly to our servers. Please use the SMTP relay at your service provider instead. For more information, review this article.
    6. 550, "5.7.1", The user or domain that you are sending to (or from) has a policy that prohibited the mail that you sent. Please contact your domain administrator for further details. For more information, review this article.
    7. 550, "5.7.1", Unauthenticated email is not accepted from this domain.
    8. 550, "5.7.1", Daily SMTP relay limit exceeded for customer. For more information on SMTP relay sending limits please contact your administrator or review this article.
  2. actually, I don't think any of those are correct. This appears to be caused by us not having an "AAAA" dns entry pointing to our ipv6 address, even though our server has 2x ipv6 addresses. In this case, it appears that our server contacted google from ipv6 address = "2a01:4f8:172:209e::2", but google didn't get that address back when it attempted to resolve 'opensourceecology.org' https://serverfault.com/questions/732187/sendmail-can-not-deliver-to-gmail-ipv6-sending-guidelines-regarding-ptr-record
[root@hetzner2 htdocs]# ip -6 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1
	inet6 ::1/128 scope host 
	   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
	inet6 2a01:4f8:172:209e::2/64 scope global 
	   valid_lft forever preferred_lft forever
	inet6 fe80::921b:eff:fe94:7c4/64 scope link 
	   valid_lft forever preferred_lft forever
[root@hetzner2 htdocs]# 
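What google wants is forward-confirmed reverse DNS: the PTR for the sending ipv6 address must resolve to a name whose AAAA record points back at that same address. A sketch of the check (it will only pass once both the PTR & AAAA records exist & have propagated):

# sketch: FCrDNS check; the AAAA answer should include 2a01:4f8:172:209e::2
# for google to accept our ipv6 mail
name=$(dig +short -x "2a01:4f8:172:209e::2")
dig +short AAAA "${name%.}"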
  1. I created an AAAA (ipv6 A) dns record (on cloudflare) pointing opensourceecology.org to 2a01:4f8:172:209e::2
  2. ^ that should take some time to propagate, and--since I can't reproduce the issue, we'll just wait to see if it occurs again & check the logs again
  3. a simpler solution might be to just change postfix to use ipv4 only, but I'll do that as a last resort https://www.linuxquestions.org/questions/linux-newbie-8/gmail-this-message-does-not-meet-ipv6-sending-guidelines-regarding-ptr-records-4175598760/
  4. note that, interestingly, the ptr (reverse lookup) records of our ipv4 addresses don't point to opensourceecology.org; they point to hetzner
user@personal:~$ dig +short -x 138.201.84.223
static.223.84.201.138.clients.your-server.de.
user@personal:~$ dig +short -x 138.201.84.243
static.243.84.201.138.clients.your-server.de.
user@personal:~$ 
  1. I'll have to check this tomorrow after propagation takes place. Hopefully the lookup will then return a result consistent with the commands above, and the issue is fixed
user@personal:~$ dig +short -x "2a01:4f8:172:209e::2"
user@personal:~$ 
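  1. for comparison, once the new record propagates, the forward lookup should look like this (expected output; assumes the cloudflare record is set to DNS-only rather than proxied):
user@personal:~$ dig +short AAAA opensourceecology.org
2a01:4f8:172:209e::2
user@personal:~$ 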
  1. in the meantime, I manually reset the users' passwords & emailed them directly
  1. ...
  1. Marcin had a 403 false-positive when attempting to embed an instagram feed. I whitelisted a rule & confirmed that I could submit the contents.
    1. id 973308, xss attack
    2. this fixed it & I emailed Marcin
  1. ...
  1. Marcin mentioned that a link to our wiki in a facebook feed shows a 403 on facebook. The link works, but the facebook "preview" in the comment feed shows a 403 Forbidden. Because facebook is dumb, I can't permalink directly to the comment (or maybe I could if I had a facebook account--not sure), but it's on this page https://www.facebook.com/groups/398759490316633/#
  2. I grepped through all the gzip'd modsecurity log files for the string 'Paysan', and I found a bunch of results. I narrowed it further with 'facebook', and found a useragent = facebookexternalhit/1.1. This was causing a 403 from rule id = 958291, protocol violation = "Range: field exists and begins with 0."
[root@hetzner2 httpd]# date
Wed Jul 11 04:09:26 UTC 2018
[root@hetzner2 httpd]# pwd
/var/log/httpd
[root@hetzner2 httpd]# for log in $(ls -1 | grep -i modsec | grep -i gz); do zcat $log | grep -iC50 'Paysan' ; done | grep -iC50 facebook
Server: Apache
Engine-Mode: "ENABLED"

--f6f9de1f-Z--

--1d0c4d75-A--
[08/Jul/2018:06:58:57 +0000] W0G2MRiIV543eZ9b0krEgAAAAAE 127.0.0.1 37540 127.0.0.1 8000
--1d0c4d75-B--
GET /entry/openid?Target=discussion%2F379%2Ffarming-agriculture-and-ranching-livestock-management-software%3Fpost%23Form_Body&url=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid HTTP/1.1
X-Real-IP: 203.133.174.77
X-Forwarded-Proto: https
X-Forwarded-Port: 443  
Host: forum.opensourceecology.org
User-Agent: Mozilla/5.0 (compatible; Daum/4.1; +http://cs.daum.net/faq/15/4118.html?faqId=28966)
Accept-Language: ko-kr,ko;q=0.8,en-us;q=0.5,en;q=0.3
Accept: */*
Accept-Charset: utf-8,EUC-KR;q=0.7,*;q=0.5
X-Forwarded-For: 203.133.174.77, 127.0.0.1, 127.0.0.1
hash: #forum.opensourceecology.org
Accept-Encoding: gzip  
X-Varnish: 13886122

--1d0c4d75-F--
HTTP/1.1 403 Forbidden 
Content-Length: 214
Content-Type: text/html; charset=iso-8859-1

--1d0c4d75-E--

--1d0c4d75-H--
Message: Access denied with code 403 (phase 2). Pattern match "([\\~\\!\\@\\#\\$\\%\\^\\&\\*\\(\\)\\-\\+\\=\\{\\}\\[\\]\\|\\:\\;\"\\'\\\xc2\xb4\\\xe2\x80\x99\\\xe2\x80\x98\\`\\<\\>].*?){4,}" at ARGS:Target. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "159"] [id "981173"] [rev "2"] [msg "Restricted SQL Character Anomaly Detection Alert - Total # of special characters exceeded"] [data "Matched Data: - found within ARGS:Target: discussion/379/farming-agriculture-and-ranching-livestock-management-software?post#Form_Body"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"]
Action: Intercepted (phase 2)
Stopwatch: 1531033137000684 589 (- - -)
Stopwatch2: 1531033137000684 589; combined=362, p1=87, p2=247, p3=0, p4=0, p5=27, sr=20, sw=1, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED" 

--1d0c4d75-Z--

--52e6a01c-A--
[08/Jul/2018:06:59:50 +0000] W0G2ZlraFr00R9M6JipfIQAAAAI 127.0.0.1 37638 127.0.0.1 8000
--52e6a01c-B--
GET /wiki/L%E2%80%99Atelier_Paysan HTTP/1.0
X-Real-IP: 66.220.146.185
X-Forwarded-Proto: https
X-Forwarded-Port: 443  
Host: wiki.opensourceecology.org
Accept: */*
User-Agent: facebookexternalhit/1.1
Range: bytes=0-131071  
X-Forwarded-For: 127.0.0.1
Accept-Encoding: gzip  
X-Varnish: 14190516

--52e6a01c-F--
HTTP/1.1 403 Forbidden 
Content-Length: 225
Connection: close
Content-Type: text/html; charset=iso-8859-1

--52e6a01c-E--

--52e6a01c-H--
Message: Access denied with code 403 (phase 2). String match "bytes=0-" at REQUEST_HEADERS:Range. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "428"] [id "958291"] [rev "2"] [msg "Range: field exists and begins with 0."] [data "bytes=0-131071"] [severity "WARNING"] [ver "OWASP_CRS/2.2.9"] [maturity "6"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/INVALID_HREQ"]
Action: Intercepted (phase 2)
Apache-Handler: php5-script
Stopwatch: 1531033190783654 371 (- - -)
Stopwatch2: 1531033190783654 371; combined=130, p1=87, p2=14, p3=0, p4=0, p5=29, sr=22, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED" 

--52e6a01c-Z--

--282b2851-A--
[08/Jul/2018:06:59:51 +0000] W0G2ZxiIV543eZ9b0krEgQAAAAE 127.0.0.1 37642 127.0.0.1 8000
--282b2851-B--
GET /wiki/L%E2%80%99Atelier_Paysan HTTP/1.0
X-Real-IP: 31.13.122.23
X-Forwarded-Proto: https
X-Forwarded-Port: 443  
Host: wiki.opensourceecology.org
Accept: */*
User-Agent: facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
Range: bytes=0-524287  
X-Forwarded-For: 127.0.0.1
Accept-Encoding: gzip  
X-Varnish: 13886168

--282b2851-F--
HTTP/1.1 403 Forbidden 
Content-Length: 225
Connection: close
Content-Type: text/html; charset=iso-8859-1

--282b2851-E--

--282b2851-H--
Message: Access denied with code 403 (phase 2). String match "bytes=0-" at REQUEST_HEADERS:Range. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "428"] [id "958291"] [rev "2"] [msg "Range: field exists and begins with 0."] [data "bytes=0-524287"] [severity "WARNING"] [ver "OWASP_CRS/2.2.9"] [maturity "6"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/INVALID_HREQ"]
Action: Intercepted (phase 2)
Apache-Handler: php5-script
Stopwatch: 1531033191244028 353 (- - -)
Stopwatch2: 1531033191244028 353; combined=129, p1=85, p2=14, p3=0, p4=0, p5=30, sr=20, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED" 

--282b2851-Z--

--41d26d46-A--
[08/Jul/2018:07:03:42 +0000] W0G3ThiIV543eZ9b0krEhAAAAAE 127.0.0.1 38196 127.0.0.1 8000
--41d26d46-B--
GET /entry/register?Target=discussion%2F541%2Fsolved-emailprocessor.php-sends-all-emails-to-039civimail.ignored039 HTTP/1.1
X-Real-IP: 96.73.213.217
X-Forwarded-Proto: https
X-Forwarded-Port: 443  
Host: forum.opensourceecology.org
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.2057.400 QQBrowser/9.5.10158.400
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
DNT: 1
X-Forwarded-For: 96.73.213.217, 127.0.0.1, 127.0.0.1
Accept-Encoding: gzip  
hash: #forum.opensourceecology.org
X-Varnish: 14190751

--41d26d46-F--
[root@hetzner2 httpd]# 
    1. I whitelisted this rule in the vhost config file
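    2. the whitelist entry is of this shape (a sketch of the standard mod_security apache syntax; the actual scoping in our vhost config may differ, e.g. limited to certain URIs):
<IfModule security2_module>
	# 958291 = "Range: field exists and begins with 0."; false-positives on
	# facebookexternalhit/1.1, which sends "Range: bytes=0-..." to build previews
	SecRuleRemoveById 958291
</IfModule>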

Fri Jul 06, 2018

  1. yesterday I calculated that we should back up about ~34.87G of data from hetzner1 to glacier before shutting down the node and terminating its contract
    1. note that this size will likely be much smaller after compression.
  2. I confirmed that we have 128G of available space on '/' on hetzner2
[root@hetzner2 ~]# date
Fri Jul  6 17:59:12 UTC 2018
[root@hetzner2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        197G   60G  128G  32% /
devtmpfs         32G     0   32G   0% /dev
tmpfs            32G     0   32G   0% /dev/shm
tmpfs            32G  3.1G   29G  10% /run
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  289M  174M  63% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1005
[root@hetzner2 ~]# 
  1. we also have 165G of available space on '/usr' on hetzner1
osemain@dedi978:~$ date
Fri Jul  6 19:59:31 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/dm-0              9.8G  363M  8.9G   4% /
udev                    10M     0   10M   0% /dev
tmpfs                  787M  788K  786M   1% /run
/dev/dm-1              322G  142G  165G  47% /usr
tmpfs                  2.0G     0  2.0G   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                95M   30M   66M  32% /boot
/dev/mapper/vg-tmp     4.8G  308M  4.3G   7% /tmp
/dev/mapper/vg-var      20G  2.3G   17G  13% /var
tmpfs                  2.0G     0  2.0G   0% /var/spool/exim/scan
/dev/mapper/vg-vartmp  5.8G  1.8G  3.8G  32% /var/tmp
osemain@dedi978:~$ 
  1. while it may make sense to do this upload to glacier from hetzner1, I've had hetzner1 terminate my screen sessions randomly in the past. I'd rather do it on hetzner2, where I actually have control over the server with root credentials. Therefore, I think I'll make the compressed tarballs on hetzner1 & scp them to hetzner2. On hetzner2 I'll encrypt the tarballs, create their (also encrypted) corresponding metadata files (listing all the files in the tarballs, for easy/cheaper querying later), and upload both.
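    1. a sketch of the encrypt-and-index step I have in mind for hetzner2 (hypothetical; gpg symmetric encryption is an assumption, and the real filenames & passphrase handling will be defined in the CHG below):
# hypothetical sketch: index & encrypt one tarball before uploading to glacier
tarball="final_backup_before_hetzner1_deprecation_osemain_home.tar.bz2"

# metadata file: list the tarball contents for easy/cheaper querying later
tar -tjvf "${tarball}" > "${tarball}.filelist.txt"

# encrypt the tarball & its file listing (prompts for a symmetric passphrase);
# produces ${tarball}.gpg and ${tarball}.filelist.txt.gpg
gpg --symmetric --cipher-algo aes256 "${tarball}"
gpg --symmetric --cipher-algo aes256 "${tarball}.filelist.txt"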
  2. I created a wiki article for this CHG, which will be the canonical URL listed in the metadata files for info on what this data is that I've uploaded to glacier https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation
  3. I discovered that the DBs on hetzner1 are necessarily accessible to the public Internet (ugh).
    1. so I _could_ do the mysqldump from hetzner2, but it's better to do it locally (less data transfer & better security), and then compress it _before_ sending it to hetzner2
  4. began backing up files on osemain
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=openswh
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=ose_fef
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=ose_website
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osesurv
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osewiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. began backups on oseblog
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DB
dbName=oseblog
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. began backups on osecivi
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=osecivi
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osedrupal
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. began backup of oseforum
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DB
dbName=oseforum
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. began backup of microft
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=microft_db2
dbUser=microft_2
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=microft_drupal1
dbUser=microft_d1u
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=microft_wiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. after compression (but before encryption), here are the resulting sizes of the backups
    1. oseforum
oseforum@dedi978:~$ find noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
57M     noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M     noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
oseforum@dedi978:~$ 
    1. osecivi 16M
osecivi@dedi978:~/noBackup$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
180K	/usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2
2.3M	/usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2
12M	/usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2
1.1M	/usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2
osecivi@dedi978:~/noBackup$ 
    1. oseforum 957M
oseforum@dedi978:~$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
854M    /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M     /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M     /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
oseforum@dedi978:~$ 
  1. created a safe dir on hetzner2 to store the files before encrypting & uploading to glacier
[root@hetzner2 tmp]# cd /var/tmp
[root@hetzner2 tmp]# mkdir deprecateHetzner1
[root@hetzner2 tmp]# chown root:root deprecateHetzner1/
[root@hetzner2 tmp]# chmod 0700 deprecateHetzner1/
[root@hetzner2 tmp]# ls -lah deprecateHetzner1/
total 8.0K
drwx------   2 root root 4.0K Jul  6 23:14 .
drwxrwxrwt. 52 root root 4.0K Jul  6 23:14 ..
[root@hetzner2 tmp]# 
  1. ...
  1. while the backups were running on hetzner2, I began looking into migrating hetzner2's active daily backups to s3.
  2. I logged into the aws console for the first time in a couple months, and I saw that our first bill was $5.20 in May, $1.08 in June, and $1.08 in July. Not bad, but that's going to go up after we dump all this hetzner1 stuff in glacier & start using s3 for our dailies. In any case, it'll be far, far, far less than the amount we'll be saving by ending our contract for hetzner1!
  3. I created our first bucket in s3 named 'oseserverbackups'
    1. important: it was set to "do not grant public read access to this bucket" !
  4. looks like I already created an IAM user & creds with access to both glacier & s3. I added this to hetzner2:/root/backups/backup.settings
  5. I installed the aws cli for the root user on hetzner2, added the creds, and confirmed that I could access the new bucket
# create temporary directory
tmpdir=`mktemp -d`

pushd "$tmpdir"

/usr/bin/wget "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip"
/usr/bin/unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws

# cleanly exit
popd
/bin/rm -rf "$tmpdir"
exit 0

[root@hetzner2 tmp.vbm56CUp50]# aws --version
aws-cli/1.15.53 Python/2.7.5 Linux/3.10.0-693.2.2.el7.x86_64 botocore/1.10.52
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
[root@hetzner2 tmp.vbm56CUp50]# aws configure
AWS Access Key ID [None]: <obfuscated>
AWS Secret Access Key [None]: <obfuscated>
Default region name [None]: us-west-2
Default output format [None]:
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
2018-07-07 00:05:22 oseserverbackups
[root@hetzner2 tmp.vbm56CUp50]#

  1. successfully tested an upload to s3
[root@hetzner2 backups]# cat /var/tmp/test.txt
some file destined for s3 this is
[root@hetzner2 backups]# aws s3 cp /var/tmp/test.txt s3://oseserverbackups/test.txt
upload: ../../var/tmp/test.txt to s3://oseserverbackups/test.txt 
[root@hetzner2 backups]# 
  1. confirmed that I could see the file in the aws console wui
  2. clicked the link for the object, and confirmed that I got an AccessDenied error https://s3-us-west-2.amazonaws.com/oseserverbackups/test.txt
  3. next step: enable a lifecycle policy. Ideally, I want to be able to say that files uploaded on the first of the month (either by metadata of the upload timestamp or by regex match on object name) will automatically "freeze" into glacier after a few days, and all other files will just get deleted automatically after a few days.
    1. so it looks like we can limit by object name match or by tag. It's probably better if we just have our script add a 'monthly-backup' tag to the object when uploading on the first-of-the-month (see the sketch below), then have our lifecycle policy key off that tag https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
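      1. e.g. the upload side could tag the object like this (a hypothetical sketch; the bucket name is ours, but the key & file names are made up):
# hypothetical: tag a first-of-month upload so a lifecycle rule can match it
aws s3api put-object \
 --bucket oseserverbackups \
 --key "hetzner2_20180801_daily.tar.gpg" \
 --body "/root/backups/hetzner2_20180801_daily.tar.gpg" \
 --tagging "monthly-backup=true"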
    2. ugh, TIL s3 objects under the storage class = STANDARD_IA have a minimum lifetime of 30 days. If you delete an object before 30 days, you're still charged for 30 days https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html
      1. that means we'll have to store 30 copies of our daily backups at minimum, which are 15G as of now. That's 450G stored to s3 = 0.023 * 450 = $10.35/mo * 12 = $124.2/yr. That sucks.
    3. per my previous research, we may want to look into using one of these providers instead:
      1. Backblaze B2 https://www.backblaze.com/b2/cloud-storage.html
      2. Google Nearline & Coldline https://cloud.google.com/storage/archival/
      3. Microsoft OneDrive https://onedrive.live.com/about/en-us/
  4. a quick calculation on the backblaze price calculator (biased, of course) with initial_upload=15G, monthly_upload=450G, monthly_delete=435G, monthly_download=3G gives a cost of $7.11/year. They say that would cost $30.15/yr on s3, $29.88/yr on google, and $26.10 on Microsoft. Well, at least they're wrong in a good way: per my calculation above, it would cost even more than that on s3. Hopefully they know their own pricing better. $8/year is great for backing up 15G every day.

Thu Jul 05, 2018

  1. logged time for last week
  2. using my ose account, I uploaded the remaining misc photos from my visit to FeF to a new album https://photos.app.goo.gl/YZGTQdWnfFWcJc6p8
    1. I created a slideshow out of this & added it to the wiki here https://wiki.opensourceecology.org/wiki/Michael_Photo_Folder
  1. ...
  1. began revisiting hetzner1. We want to dump all the content onto glacier before we terminate our contract here.
  2. I just checked the billing section. Wow, it's 74.79 eur per month. What a rip-off! Hopefully we won't have to pay that much longer.
  3. because we don't have root, this is more tricky. First, we need to get a list of all the users & investigate what data each has. If the total amount of data is small enough, we can just tar it all up & ship it to glacier.
  4. it's not an exact test, but skimming through /etc/passwd suggests that there may be 11 users on hetzner1: osemain, osecivi, oseblog, oseforum, oseirc, oseholla, osesurv, sandbox, microft, zabbix, openswh
  5. a better test is probably checking which users' shells are /bin/bash
osemain@dedi978:~$ grep '/bin/bash' /etc/passwd
root:x:0:0:root:/root:/bin/bash
postgres:x:111:113:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash
osemain:x:1010:1010:opensourceecology.org:/usr/home/osemain:/bin/bash
osecivi:x:1014:1014:civicrm.opensourceecology.org:/usr/home/osecivi:/bin/bash
oseblog:x:1015:1015:blog.opensourceecology.org:/usr/home/oseblog:/bin/bash
oseforum:x:1016:1016:forum.opensourceecology.org:/usr/home/oseforum:/bin/bash
osesurv:x:1020:1020:survey.opensourceecology.org:/usr/home/osesurv:/bin/bash
microft:x:1022:1022:test.opensourceecology.org:/usr/home/microft:/bin/bash
osemain@dedi978:~$ 
  1. excluding postgres & root, it looks like 6x users (many of the others are addons, and I think they're under 'osemain') = osemain, osecivi, oseblog, oseforum, osesurv, and microft
osemain@dedi978:~$ ls -lah public_html/archive/addon-domains/
total 32K
drwxr-xr-x  8 osemain users   4.0K Jan 18 16:56 .
drwxr-xr-x 13 osemain osemain 4.0K Jun 20  2017 ..
drwxr-xr-x  2 osemain users   4.0K Jul 26  2011 addontest
drwx---r-x  2 osemain users   4.0K Jul 26  2011 holla
drwx---r-x  2 osemain users   4.0K Jul 26  2011 irc
drwxr-xr-x  2 osemain osemain 4.0K Jan 18 16:59 opensourcewarehouse.org
drwxr-xr-x  2 osemain osemain 4.0K Feb 23  2012 sandbox
drwxr-xr-x 13 osemain osemain 4.0K Dec 30  2017 survey
osemain@dedi978:~$ 
  1. I was able to ssh in as osemain, osecivi, oseblog, and oseforum (using my pubkey, so I must have set this up earlier when investigating what I needed to migrate). I was _not_ able to ssh in as 'osesurv' and 'microft'
  2. on the main page of the konsoleh wui after logging in, there are 5 domains listed: "(blog|civicrm|forum|test).opensourceecology.org" and 'opensourceecology.org'. The one that stands out here is 'test.opensourceecology.org'. Upon clicking on it & digging around, I found that the username for this domain is 'microft'.
    1. In this test = microft domain (in the konsoleh wui), I tried to click 'WebFTP' (which is how I would upload my ssh key), but I got an error "Could not connect to server dedi978.your-server.de:21 with user microft". Indeed, it looks like the account is "suspended"
    2. to confirm further, I clicked the "FTP" link for the forum account, and confirmed that I could ftp in (ugh) as the user & password supplied by the wui (double-ugh). I tried again using the user/pass from the test=microft domain, and I could not log in
    3. ^ that said, it *does* list it as using 4.49G of disk space + 3 DBs
    4. the 3 DBs are mysql = microft_db2 (24.3M), microft_drupal1 (29.7M), and microft_wiki (19.4G). Holy shit, 19.4G DB!
      1. digging into the last db's phpmyadmin, I see a table named "wiki_objectcache" that's 4.2G, "wiki_searchindex" that's 2.7G, and "wiki_text" that's 7.4G. This certainly looks like a Mediawiki DB.
      2. from the wiki_user table, the last user_id = 1038401 = Traci Clutter, which was created on 20150702040307
  3. I found that all these accounts are still accessible from a subdomain of our dedi978.your-server.de dns:
    1. http://blog.opensourceecology.org.dedi978.your-server.de/
      1. this one gives a 500 internal server error
    2. http://civicrm.opensourceecology.org.dedi978.your-server.de/
      1. this one actually loads a drupal page with a login, though the only content is " Welcome to OSE CRM / No front page content has been created yet."
    3. http://forum.opensourceecology.org.dedi978.your-server.de/
      1. this one still loads, and appears to be fully functional (ugh)
    4. http://test.opensourceecology.org.dedi978.your-server.de/
      1. this gives a 403 forbidden with the comment "You don't have permission to access / on this server." "Server unable to read htaccess file, denying access to be safe"
  4. In digging through the test.opensourceecology.org domain's settings, I found "Services -> Settings -> Block / Unblock". It (unlike the others) was listed as "Status: Blocked." So I clicked the "Unblock it" button and got "The domain has been successfully unblocked.".
    1. now WebFTP worked
    2. this now loads too http://test.opensourceecology.org.dedi978.your-server.de/ ! It's pretty horribly broken, but it appears to be a "True Fans Drupal" "Microfunding Proposal" site. I wouldn't be surprised if it got "blocked" due to being a hacked outdated version of drupal.
    3. WebFTP didn't let me upload a .ssh dir (it appears to not work with hidden dirs = '.' prefix), but I was able to FTP in (ugh)
    4. I downloaded the existing .ssh/authorized_keys file, added my key to the end of it, and re-uploaded it
    5. I was able to successfully ssh-in!
  5. ok, now that I have access to what I believe to be all the accounts, let's determine what they've got in files
  6. I found a section of the hetzner konsoleh wui that shows sizes per account (Under Statistics -> Account overview)
    1. opensourceecology.org 99.6G
    2. blog.opensourceecology.org 8.71G
    3. test.opensourceecology.org 4.49G
    4. forum.opensourceecology.org 1.15G
    5. civicrm.opensourceecology.org 170M
    6. ^ all sites display "0G" for "Traffic"
  7. osemain has 5.7G, not including the websites that we migrated, whose data has been moved to 'noBackup'
osemain@dedi978:~$ date
Fri Jul  6 01:20:41 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ du -sh * --exclude='noBackup'
983M	backups
1.3M	bin
4.0K	composer.json
36K	composer.lock
4.0K	cron
4.0K	emails.txt
9.8M	extensions
16K	freemind.sourceforge.net
4.0K	id-dsa-iphone.pub
4.0K	id_rsa-hetzner
4.0K	id_rsa-hetzner.pub
288K	installer
0	jboss
470M	jboss-4.2.3.GA
4.0K	jboss-command-line.txt
234M	jdk1.6.0_29
0	jdk-6
808K	mbkp
0	opensourceecology.org
4.0K	passwd.cdb
4.0K	PCRE-patch
0	public_html
4.0K	uc?id=0B1psBarfpPkzb0JQV1B6Z01teVk
28K	users
16K	var-run
2.9M	vendor
4.0K	videos
4.0K	wiki_olddocroot
1.1M	wrapper-linux-x86-64-3.5.13
2.6G	www_logs
osemain@dedi978:~$ du -sh --exclude='noBackup'
5.7G	.
osemain@dedi978:~$ 
  2. oseblog has 2.7G
oseblog@dedi978:~$ date
Fri Jul  6 02:39:11 CEST 2018
oseblog@dedi978:~$ pwd
/usr/home/oseblog
oseblog@dedi978:~$ whoami
oseblog
oseblog@dedi978:~$ du -sh *
8.0K	bin
0	blog.opensourceecology.org
12K	cron
788K	mbkp
349M	oftblog.dump
4.0K	passwd.cdb
0	public_html
208K	tmp
104K	users
1.2G	www_logs
oseblog@dedi978:~$ du -sh
2.7G	.
oseblog@dedi978:~$ 
  1. osecivi has 44M
osecivi@dedi978:~$ date
Fri Jul  6 02:40:19 CEST 2018
osecivi@dedi978:~$ pwd
/usr/home/osecivi
osecivi@dedi978:~$ whoami
osecivi
osecivi@dedi978:~$ du -sh *
4.0K	bin
0	civicrm.opensourceecology.org
4.0K	civimail-errors.txt
2.0M	CiviMail.ignored-2011
20K	civimail.out
20K	cron
2.5M	d7-civicrm.dump
828K	d7-drupal.dump
788K	mbkp
2.2M	oftcivi.dump
8.0M	oftdrupal.dump
4.0K	passwd.cdb
0	public_html
4.0K	pw.txt
28K	users
3.4M	www_logs
osecivi@dedi978:~$ du -sh
44M	.
osecivi@dedi978:~$ 
  1. oseforum has 1.1G
oseforum@dedi978:~$ date
Fri Jul  6 02:41:04 CEST 2018
oseforum@dedi978:~$ pwd
/usr/home/oseforum
oseforum@dedi978:~$ whoami
oseforum
oseforum@dedi978:~$ du -sh *
8.0K	bin
16K	cron
0	forum.opensourceecology.org
788K	mbkp
7.5M	oftforum.dump
4.0K	passwd.cdb
0	public_html
102M	tmp
14M	users
11M	vanilla-2.0.18.1
756M	www_logs
oseforum@dedi978:~$ du -sh
1.1G	.
oseforum@dedi978:~$ 
  1. microft has 1.8G
microft@dedi978:~$ date
Fri Jul  6 02:42:00 CEST 2018
microft@dedi978:~$ pwd
/usr/home/microft
microft@dedi978:~$ whoami
microft
microft@dedi978:~$ du -sh *
8.8M	db-backup
3.6M	drupal.sql
1.6M	drush
44M	drush-backups
1.7M	git_repos
376M	mbkp-wiki-db
18M	mediawiki-1.20.2.tar.gz
4.0K	passwd.cdb
0	public_html
28K	users
1.3G	www_logs
microft@dedi978:~$ du -sh
1.8G	.
microft@dedi978:~$ 
  1. those numbers above are files only; they don't include mailboxes or databases. I don't really care about mailboxes (they're probably unused), but I do want to back up databases.
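    1. for the record, the per-db sizes below can be pulled from each account's phpmyadmin with a query like this (a sketch; information_schema sizes are approximate):
SELECT table_schema AS db,
       ROUND(SUM(data_length + index_length)/1024/1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;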
  2. osemain has 5 databases:
    1. openswh 7.51M
    2. ose_fef 3.65M
    3. ose_website 32M
    4. osesurv 697K
    5. osewiki 2.48G
    6. there don't appear to be any DBs for the 'addon' domains under this account (addontest, holla, irc, opensourcewarehouse, sandbox, survey)
  3. oseblog has 1 db
    1. oseblog 1.23G
  4. osecivi has 2 dbs
    1. osecivi 31.3M
    2. osedrupal 8.05M
  5. oseforum has 1 db
    1. oseforum 182M
  6. microft has 3 dbs
    1. microft_db2 24.3M
    2. microft_drupal1 33.4M
    3. microft_wiki 19.5G
  7. so the total size of file data to backup is 5.7+2.7+0.04+1.1+1.8 = 11.34G
  8. and the total size of db data to backup is 0.008+0.004+0.032+0.001+2.48+1.23+0.031+0.008+0.18+0.024+0.033+19.5 = 23.53G
  9. therefore, the total size of backups (files + DBs) to push to glacier so we can feel safe permanently shutting down hetzner1 is 11.34 + 23.53 = 34.87G
    1. note that this size will likely be much smaller after compression.