Maltfield Log/2018 Q3
# [[User:Maltfield]]
# [[Special:Contributions/Maltfield]]
=Wed Aug 01, 2018=
# Marcin responded to me about phplist. OSE does prefer a free FLOSS solution. I think I understand, though, why most low-budget nonprofits choose mailchimp despite the cost: their analytics & a/b tests give the development/marketing teams essential tools to increase visibility, clicks, and donations that justify the costs. So I'll proceed by researching phplist and its FLOSS alternatives to see which provides the best analytics and a/b tests
## https://www.quora.com/What-are-the-cheaper-better-alternatives-to-MailChimp
## https://alternativeto.net/software/phplist/?license=opensource
## https://mailchimp.com/help/about-ab-testing-campaigns/
## https://mailchimp.com/features/ab-testing/
# Marcin asked me to build d3d.opensourceecology.org as my highest-priority task. I said I could do this by the middle of next week, and I asked Catarina to buy the latest version of the theme she wants & send me the zip, which she said she would do within a few days.
## first step: dns. I created d3d.opensourceecology.org in our cloudflare account to point to 138.201.84.243
## cloudflare wouldn't let me login without entering a 2fa token sent (insecurely) via email to our shared cloudflare@opensourceecology.org account
## gmail wouldn't let me login to the above email address for the same reason. They wanted me to enter a phone number for a 2fa token to be sent (insecurely) via sms
## this shit is fucking annoying. I went to the g suite admin console as superuser to see if I could permanently disable these god damn bullshit 2fa requests from gmail. I found that Users -> Cloud Flare -> Security had a button to "Turn off for 10 mins" next to the "Login Challenge" section with the description "Turn off identity questions for 10 minutes after a suspicious attempt to sign in. Disabling login challenge will make the account less secure"
## we have a fucking 100-character, unique password securely stored in a keepass. I understand what google is doing, but that's for people who reuse shitty passwords across many sites. We should be able to disable this fucking bug indefinitely on a per-account basis >:O
## finally, after the temp disable, I was able to login to the gmail, get the token for cloudflare, login to cloudflare, and then add the dns record for 'd3d.opensourceecology.org'
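## for the record, a hedged sketch of how the new record could be verified from the shell once it propagates. The resolver call is isolated in its own function so the comparison logic stands on its own; the hostname and IP are the ones from the log above:

```shell
# Sketch: confirm a new A record serves the expected address.
# lookup_a() wraps the resolver so the check itself needs no network to test.
lookup_a() {
    dig +short "$1" | head -n1
}

check_a_record() {
    host="$1"; expected="$2"
    got="$(lookup_a "$host")"
    if [ "$got" = "$expected" ]; then
        echo "OK: $host -> $expected"
    else
        echo "MISMATCH: $host -> '$got'"
    fi
}

# Against the live zone (network access assumed):
#   check_a_record d3d.opensourceecology.org 138.201.84.243
```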
# I generated a 70-character password for a new d3d_user mysql user and added it to keepass
# I went to create the cert, but certbot failed!
<pre>
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 9, in <module>
    load_entry_point('certbot==0.19.0', 'console_scripts', 'certbot')()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 479, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2703, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2321, in load
    return self.resolve()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2327, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/lib/python2.7/site-packages/certbot/main.py", line 19, in <module>
    from certbot import client
  File "/usr/lib/python2.7/site-packages/certbot/client.py", line 11, in <module>
    from acme import client as acme_client
  File "/usr/lib/python2.7/site-packages/acme/client.py", line 31, in <module>
    requests.packages.urllib3.contrib.pyopenssl.inject_into_urllib3()  # type: ignore
  File "/usr/lib/python2.7/site-packages/urllib3/contrib/pyopenssl.py", line 118, in inject_into_urllib3
    _validate_dependencies_met()
  File "/usr/lib/python2.7/site-packages/urllib3/contrib/pyopenssl.py", line 153, in _validate_dependencies_met
    raise ImportError("'pyOpenSSL' module missing required functionality. "
ImportError: 'pyOpenSSL' module missing required functionality. Try upgrading to v0.14 or newer.
[root@hetzner2 htdocs]#
</pre>
# I (foolishly) was playing with pip earlier to get b2 working. I think this is why. I fucking hate pip, and shouldn't have touched it on a production box. https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q3#Tue_Jul_17.2C_2018
## in the effort to get b2 working, I installed pip from the yum repo, had pip update itself, installed setuptools>=20.2 from pip, and finally installed this long list via pip: "six, python-dateutil, backports.functools-lru-cache, arrow, funcsigs, logfury, certifi, chardet, urllib3, idna, requests, tqdm, futures, b2"
## even ^ that failed (fucking pip) so I resorted to installing from git
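## a hedged diagnostic sketch for this kind of pip-vs-yum mess: ask the interpreter which copy of a module it actually loads, then ask the package manager who owns that path. The module names below are stdlib stand-ins; on the box it would be run with python2 against requests/urllib3:

```shell
# Print the directory each module is imported from; a path under
# site-packages can then be fed to `rpm -qf <path>` on an RPM-based
# system to learn whether yum or pip put it there.
python3 -c '
import os
for name in ("json", "os"):
    mod = __import__(name)
    print(name, "->", os.path.dirname(mod.__file__))
'
# e.g. on CentOS 7 (assumption, not from the log):
#   rpm -qf /usr/lib/python2.7/site-packages/requests/__init__.py
```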
# I removed the pip-installed certbot and related dependencies, then installed with yum
<pre>
pip uninstall certbot certbot-apache certbot-nginx requests six urllib3 acme
yum install python-requests python-six python-urllib3
</pre>
# this got me a new error, this time with acme
<pre>
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3098, in <module>
    @_call_aside
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3082, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3111, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 573, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 891, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 777, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'acme>0.24.0' distribution was not found and is required by certbot
[root@hetzner2 htdocs]#
</pre>
# I'm just going to fucking uninstall pip and all its packages
<pre>
[root@hetzner2 htdocs]# packages=$(pip list 2>&1 | tail -n+3 | head -n-2 | awk '{print $1}')
[root@hetzner2 htdocs]# echo $packages
arrow b2 backports.functools-lru-cache backports.ssl-match-hostname boto certbot certifi cffi chardet ConfigArgParse configobj cryptography decorator duplicity enum34 fasteners funcsigs future futures GnuPGInterface google-api-python-client httplib2 idna iniparse ipaddress IPy iso8601 javapackages josepy keyring lockfile logfury lxml mock ndg-httpsclient oauth2client paramiko parsedatetime perf pexpect pip ply policycoreutils-default-encoding psutil pyasn1 pyasn1-modules pycparser pycurl PyDrive pygobject pygpgme pyliblzma pyOpenSSL pyparsing pyRFC3339 python-augeas python-dateutil python-gflags python-linux-procfs python2-pythondialog pytz pyudev pyxattr PyYAML requests requests-toolbelt rsa schedutils seobject sepolicy setuptools slip slip.dbus SQLAlchemy tqdm uritemplate urlgrabber yum-metadata-parser zope.component zope.event zope.interface
[root@hetzner2 htdocs]#
[root@hetzner2 htdocs]# for p in $packages; do pip uninstall $p; done
[root@hetzner2 htdocs]# yum remove python-pip
...
Removed:
  python2-pip.noarch 0:8.1.2-6.el7
Complete!
[root@hetzner2 htdocs]# yum install certbot
...
Updated:
  certbot.noarch 0:0.26.1-1.el7
Dependency Updated:
  python2-certbot.noarch 0:0.26.1-1.el7
Complete!
[root@hetzner2 htdocs]#
</pre>
# I still have errors, but this is a new one. Fucking pip!
<pre>
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3098, in <module>
    @_call_aside
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3082, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3111, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 573, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 891, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 777, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'parsedatetime>=1.3' distribution was not found and is required by certbot
[root@hetzner2 htdocs]#
</pre>
# I slowly began to install these packages back from yum as needed, but I hit an issue with urllib3. A find shows other copies installed by the aws cli
<pre>
[root@hetzner2 htdocs]# find / -name urllib3
/root/.local/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3
/root/.local/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3
[root@hetzner2 htdocs]# mv /root/.local /root/.local.bak
[root@hetzner2 htdocs]#
</pre>
# after the yum install & the move, certbot still failed
<pre>
ImportError: No module named 'requests.packages.urllib3'
[root@hetzner2 htdocs]# find / -name urllib3
/root/.local.bak/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3
/root/.local.bak/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3
/usr/lib/python2.7/site-packages/urllib3
[root@hetzner2 htdocs]#
</pre>
# here's all the packages; not sure why it can't fucking find urllib3
<pre>
[root@hetzner2 htdocs]# ls /usr/lib/python2.7/site-packages/
acme GnuPGInterface.pyc python_augeas-0.5.0-py2.7.egg-info
acme-0.25.1-py2.7.egg-info GnuPGInterface.pyo python_dateutil-2.7.3.dist-info
ANSI.py html python_gflags-2.0-py2.7.egg-info
ANSI.pyc http python_linux_procfs-0.4.9-py2.7.egg-info
ANSI.pyo idna pytz
augeas.py idna-2.4-py2.7.egg-info pytz-2016.10-py2.7.egg-info
augeas.pyc iniparse pyudev
augeas.pyo iniparse-0.4-py2.7.egg-info pyudev-0.15-py2.7.egg-info
backports ipaddress-1.0.16-py2.7.egg-info queue
backports.functools_lru_cache-1.2.1-py2.7.egg-info ipaddress.py repoze
backports.functools_lru_cache-1.2.1-py2.7-nspkg.pth ipaddress.pyc repoze.lru-0.4-py2.7.egg-info
builtins ipaddress.pyo repoze.lru-0.4-py2.7-nspkg.pth
cached_property-1.3.0-py2.7.egg-info IPy-0.75-py2.7.egg-info reprlib
cached_property.py IPy.py requests
cached_property.pyc IPy.pyc requests-2.19.1-py2.7.egg
cached_property.pyo IPy.pyo requests-2.6.0-py2.7.egg-info
certbot josepy requests_toolbelt
certbot-0.26.1-py2.7.egg-info josepy-1.1.0-py2.7.egg-info requests_toolbelt-0.8.0-py2.7.egg-info
chardet jsonschema rpmUtils
chardet-2.2.1-py2.7.egg-info jsonschema-2.5.1-py2.7.egg-info rsa
ConfigArgParse-0.11.0-py2.7.egg-info libfuturize rsa-3.4.1-py2.7.egg-info
configargparse.py libpasteurize screen.py
configargparse.pyc List-1.3.0-py2.7.egg screen.pyc
configargparse.pyo lockfile screen.pyo
configobj-4.7.2-py2.7.egg-info lockfile-0.9.1-py2.7.egg-info setuptools
configobj.py _markupbase setuptools-40.0.0.dist-info
configobj.pyc mock-1.0.1-py2.7.egg-info six-1.9.0-py2.7.egg-info
configobj.pyo mock.py six.py
copyreg mock.pyc six.pyc
dateutil mock.pyo six.pyo
dialog.py ndg slip
dialog.pyc ndg_httpsclient-0.3.2-py2.7.egg-info slip-0.4.0-py2.7.egg-info
dialog.pyo ndg_httpsclient-0.3.2-py2.7-nspkg.pth slip.dbus-0.4.0-py2.7.egg-info
docopt-0.6.2-py2.7.egg-info parsedatetime socketserver
docopt.py parsedatetime-2.4-py2.7.egg-info texttable-1.3.1-py2.7.egg-info
docopt.pyc past texttable.py
docopt.pyo pexpect-2.3-py2.7.egg-info texttable.pyc
_dummy_thread pexpect.py texttable.pyo
easy-install.pth pexpect.pyc _thread
easy_install.py pexpect.pyo tkinter
easy_install.pyc pkg_resources tqdm-4.23.4-py2.7.egg
enum ply tuned
enum34-1.0.4-py2.7.egg-info ply-3.4-py2.7.egg-info uritemplate
fdpexpect.py procfs uritemplate-3.0.0-py2.7.egg-info
fdpexpect.pyc pxssh.py urlgrabber
fdpexpect.pyo pxssh.pyc urlgrabber-3.10-py2.7.egg-info
firewall pxssh.pyo urllib3
FSM.py pyasn1 urllib3-1.10.2-py2.7.egg-info
FSM.pyc pyasn1-0.1.9-py2.7.egg-info validate.py
FSM.pyo pyasn1_modules validate.pyc
future pyasn1_modules-0.0.8-py2.7.egg-info validate.pyo
future-0.16.0-py2.7.egg-info pycparser winreg
gflags.py pycparser-2.14-py2.7.egg-info xmlrpc
gflags.pyc pyparsing-1.5.6-py2.7.egg-info yum
gflags.pyo pyparsing.py zope
gflags_validators.py pyparsing.pyc zope.component-4.1.0-py2.7.egg-info
gflags_validators.pyc pyparsing.pyo zope.component-4.1.0-py2.7-nspkg.pth
gflags_validators.pyo pyrfc3339 zope.event-4.0.3-py2.7.egg-info
GnuPGInterface-0.3.2-py2.7.egg-info pyRFC3339-1.0-py2.7.egg-info zope.event-4.0.3-py2.7-nspkg.pth
GnuPGInterface.py python2_pythondialog-3.3.0-py2.7.egg-info
[root@hetzner2 htdocs]#
</pre>
# I uninstalled a lot of packages, then did the `ls` again. I've seen some correlation between urllib3 & requests, so maybe it's this lingering requests dir...
<pre>
[root@hetzner2 htdocs]# yum remove python-parsedatetime python-mock python-josepy python-cryptography python-configargparse python-future python-six python-idna python-requests python-chardet python-requests-toolbelt python-urllib3 pyOpenSSL
...
[root@hetzner2 htdocs]# ls /usr/lib/python2.7/site-packages/
ANSI.py GnuPGInterface-0.3.2-py2.7.egg-info python_augeas-0.5.0-py2.7.egg-info
ANSI.pyc GnuPGInterface.py python_dateutil-2.7.3.dist-info
ANSI.pyo GnuPGInterface.pyc python_gflags-2.0-py2.7.egg-info
augeas.py GnuPGInterface.pyo python_linux_procfs-0.4.9-py2.7.egg-info
augeas.pyc iniparse pytz
augeas.pyo iniparse-0.4-py2.7.egg-info pytz-2016.10-py2.7.egg-info
backports ipaddress-1.0.16-py2.7.egg-info pyudev
backports.functools_lru_cache-1.2.1-py2.7.egg-info ipaddress.py pyudev-0.15-py2.7.egg-info
backports.functools_lru_cache-1.2.1-py2.7-nspkg.pth ipaddress.pyc repoze
cached_property-1.3.0-py2.7.egg-info ipaddress.pyo repoze.lru-0.4-py2.7.egg-info
cached_property.py IPy-0.75-py2.7.egg-info repoze.lru-0.4-py2.7-nspkg.pth
cached_property.pyc IPy.py requests-2.19.1-py2.7.egg
cached_property.pyo IPy.pyc rpmUtils
configobj-4.7.2-py2.7.egg-info IPy.pyo rsa
configobj.py jsonschema rsa-3.4.1-py2.7.egg-info
configobj.pyc jsonschema-2.5.1-py2.7.egg-info screen.py
configobj.pyo List-1.3.0-py2.7.egg screen.pyc
dateutil lockfile screen.pyo
dialog.py lockfile-0.9.1-py2.7.egg-info setuptools
dialog.pyc pexpect-2.3-py2.7.egg-info setuptools-40.0.0.dist-info
dialog.pyo pexpect.py slip
docopt-0.6.2-py2.7.egg-info pexpect.pyc slip-0.4.0-py2.7.egg-info
docopt.py pexpect.pyo slip.dbus-0.4.0-py2.7.egg-info
docopt.pyc pkg_resources texttable-1.3.1-py2.7.egg-info
docopt.pyo ply texttable.py
easy-install.pth ply-3.4-py2.7.egg-info texttable.pyc
easy_install.py procfs texttable.pyo
easy_install.pyc pxssh.py tqdm-4.23.4-py2.7.egg
enum pxssh.pyc tuned
enum34-1.0.4-py2.7.egg-info pxssh.pyo uritemplate
fdpexpect.py pyasn1 uritemplate-3.0.0-py2.7.egg-info
fdpexpect.pyc pyasn1-0.1.9-py2.7.egg-info urlgrabber
fdpexpect.pyo pyasn1_modules urlgrabber-3.10-py2.7.egg-info
firewall pyasn1_modules-0.0.8-py2.7.egg-info validate.py
FSM.py pycparser validate.pyc
FSM.pyc pycparser-2.14-py2.7.egg-info validate.pyo
FSM.pyo pyparsing-1.5.6-py2.7.egg-info yum
gflags.py pyparsing.py zope
gflags.pyc pyparsing.pyc zope.component-4.1.0-py2.7.egg-info
gflags.pyo pyparsing.pyo zope.component-4.1.0-py2.7-nspkg.pth
gflags_validators.py pyrfc3339 zope.event-4.0.3-py2.7.egg-info
gflags_validators.pyc pyRFC3339-1.0-py2.7.egg-info zope.event-4.0.3-py2.7-nspkg.pth
gflags_validators.pyo python2_pythondialog-3.3.0-py2.7.egg-info
[root@hetzner2 htdocs]# find / -name requests
/root/.local.bak/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests
/root/.local.bak/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests
/usr/lib/python2.7/site-packages/requests-2.19.1-py2.7.egg/requests
[root@hetzner2 htdocs]#
</pre>
# fucking hell pip. I never should have played with the devil here. Pip is the god damn devil. Let's try to import it ourselves.
<pre>
[maltfield@hetzner2 ~]$ date
Wed Aug 1 21:03:41 UTC 2018
[maltfield@hetzner2 ~]$ pwd
/home/maltfield
[maltfield@hetzner2 ~]$ which python
/usr/bin/python
[maltfield@hetzner2 ~]$ python
Python 2.7.5 (default, Aug 4 2017, 00:39:18)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import math;
>>> import requests;
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/requests/__init__.py", line 58, in <module>
    from . import utils
  File "/usr/lib/python2.7/site-packages/requests/utils.py", line 32, in <module>
    from .exceptions import InvalidURL
  File "/usr/lib/python2.7/site-packages/requests/exceptions.py", line 10, in <module>
    from .packages.urllib3.exceptions import HTTPError as BaseHTTPError
  File "/usr/lib/python2.7/site-packages/requests/packages/__init__.py", line 95, in load_module
    raise ImportError("No module named '%s'" % (name,))
ImportError: No module named 'requests.packages.urllib3'
</pre>
# the actual error is in /usr/lib/python2.7/site-packages/requests/exceptions.py on line 10
<pre>
from .packages.urllib3.exceptions import HTTPError as BaseHTTPError
</pre>
# using `rpm -V`, I got a list of corrupted python packages
<pre>
[root@hetzner2 ~]# for package in $(rpm -qa | grep -i python); do if `rpm -V $package` ; then echo $package; fi; done
python-javapackages-3.4.1-11.el7.noarch
python2-iso8601-0.1.11-7.el7.noarch
python-httplib2-0.9.2-1.el7.noarch
python2-keyring-5.0-3.el7.noarch
python-backports-1.0-8.el7.x86_64
python-decorator-3.4.0-3.el7.noarch
python-lxml-3.2.1-4.el7.x86_64
python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch
[root@hetzner2 ~]#
</pre>
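# side note, hedged: `rpm -V` prints nothing and exits 0 for a clean package, so the backtick construct above is fragile (it executes rpm's output as a command). A sketch of the intended loop, with the verify command passed in as a parameter so the logic stands on its own:

```shell
# Print only the packages whose verification exits non-zero;
# verify_cmd stands in for `rpm -V` here.
list_modified() {
    verify_cmd="$1"; shift
    for p in "$@"; do
        if ! $verify_cmd "$p" >/dev/null 2>&1; then
            echo "modified: $p"
        fi
    done
}

# On the CentOS 7 box in this log it would be driven as (assumption):
#   list_modified "rpm -V" $(rpm -qa | grep -i python)
```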
# I tried to remove & re-install these packages
<pre>
[root@hetzner2 ~]# yum remove python-javapackages python2-iso8601 python-httplib2 python2-keyring python-backports python-decorator python-lxml python-backports-ssl_match_hostname
...
[root@hetzner2 ~]# yum install python-javapackages python2-iso8601 python-httplib2 python2-keyring python-backports python-decorator python-lxml python-backports-ssl_match_hostname certbot jitsi java-1.8.0-openjdk tuned
</pre>
# finally, it works again! Lesson learned: never, ever use pip.
<pre>
[root@hetzner2 ~]# certbot
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
[root@hetzner2 ~]# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: opensourceecology.org
    Domains: opensourceecology.org awstats.opensourceecology.org fef.opensourceecology.org forum.opensourceecology.org munin.opensourceecology.org oswh.opensourceecology.org staging.opensourceecology.org wiki.opensourceecology.org www.opensourceecology.org
    Expiry Date: 2018-08-22 21:36:52+00:00 (VALID: 21 days)
    Certificate Path: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/opensourceecology.org/privkey.pem
  Certificate Name: openbuildinginstitute.org
    Domains: www.openbuildinginstitute.org awstats.openbuildinginstitute.org openbuildinginstitute.org seedhome.openbuildinginstitute.org
    Expiry Date: 2018-09-11 03:20:08+00:00 (VALID: 40 days)
    Certificate Path: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/openbuildinginstitute.org/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[root@hetzner2 ~]#
</pre>
# undoing my changes from testing, I'll move the /root/.local dir back
<pre>
[root@hetzner2 ~]# aws
-bash: aws: command not found
[root@hetzner2 ~]# mv .local.bak .local
[root@hetzner2 ~]# aws
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: too few arguments
[root@hetzner2 ~]#
</pre>
# all packages look good, and `certbot` still works!
<pre>
[root@hetzner2 ~]# for package in $(rpm -qa | grep -i python); do if `rpm -V $package` ; then echo $package; fi; done
[root@hetzner2 ~]# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: opensourceecology.org
    Domains: opensourceecology.org awstats.opensourceecology.org fef.opensourceecology.org forum.opensourceecology.org munin.opensourceecology.org oswh.opensourceecology.org staging.opensourceecology.org wiki.opensourceecology.org www.opensourceecology.org
    Expiry Date: 2018-08-22 21:36:52+00:00 (VALID: 21 days)
    Certificate Path: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/opensourceecology.org/privkey.pem
  Certificate Name: openbuildinginstitute.org
    Domains: www.openbuildinginstitute.org awstats.openbuildinginstitute.org openbuildinginstitute.org seedhome.openbuildinginstitute.org
    Expiry Date: 2018-09-11 03:20:08+00:00 (VALID: 40 days)
    Certificate Path: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/openbuildinginstitute.org/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[root@hetzner2 ~]#
</pre>
# b2 is no longer accessible
<pre>
[root@hetzner2 ~]# su - b2user
Last login: Sat Jul 28 19:48:19 UTC 2018 on pts/105
[b2user@hetzner2 ~]$ b2
-bash: b2: command not found
[b2user@hetzner2 ~]$
</pre>
# I went to install b2 for the b2user, but it complains about setuptools and tells me to use pip to get setuptools >=20.2. Fuck that!
<pre>
[b2user@hetzner2 ~]$ mkdir sandbox
[b2user@hetzner2 ~]$ cd sandbox/
[b2user@hetzner2 sandbox]$ git clone https://github.com/Backblaze/B2_Command_Line_Tool.git
Cloning into 'B2_Command_Line_Tool'...
remote: Counting objects: 6010, done.
remote: Total 6010 (delta 0), reused 0 (delta 0), pack-reused 6009
Receiving objects: 100% (6010/6010), 1.49 MiB | 1.65 MiB/s, done.
Resolving deltas: 100% (4342/4342), done.
[b2user@hetzner2 sandbox]$ cd B2_Command_Line_Tool/
[b2user@hetzner2 B2_Command_Line_Tool]$ python setup.py install
setuptools 20.2 or later is required. To fix, try running: pip install "setuptools>=20.2"
[b2user@hetzner2 B2_Command_Line_Tool]$
</pre>
# setuptools from yum is 0.9.8-7
<pre>
[root@hetzner2 site-packages]# rpm -qa | grep setuptools
python-setuptools-0.9.8-7.el7.noarch
[root@hetzner2 site-packages]#
</pre>
# I'll have to fix b2 for the b2user. Perhaps I can figure out how to set up a separate env. I'll also have to finish setting up the d3d site, but at least certbot is fixed. It could have been nasty if our cert had expired while `certbot` was broken; all the sites would have become inaccessible within 90 days! Fuck pip. Fuck pip. Fuck pip. Stick to the distro package manager. Always stick to the distro package manager.
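# a hedged sketch of that "separate env" idea: give b2user its own virtualenv so b2's pip-installed dependencies never touch the system site-packages again. The path is illustrative; the box in this log runs python2.7 (where the `virtualenv` tool would be used), python3's built-in venv module is shown here:

```shell
# Create an isolated environment for the b2 CLI; nothing here writes to
# /usr/lib/python2.7/site-packages.
VENV="${VENV:-$HOME/b2-venv}"
python3 -m venv --without-pip "$VENV"   # --without-pip just keeps the sketch offline
"$VENV/bin/python" --version

# Real usage would bootstrap pip inside the venv and then (network step):
#   "$VENV/bin/pip" install b2
# and b2user's PATH would gain "$VENV/bin".
```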
# I went ahead and ran the letsencrypt renew script (which had been failing before). It renewed the ose cert.
<pre>
[root@hetzner2 site-packages]# /root/bin/letsencrypt/renew.sh
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/opensourceecology.org.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert is due for renewal, auto-renewing...
Plugins selected: Authenticator webroot, Installer None
Starting new HTTPS connection (1): acme-v02.api.letsencrypt.org
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for awstats.opensourceecology.org
http-01 challenge for fef.opensourceecology.org
http-01 challenge for forum.opensourceecology.org
http-01 challenge for munin.opensourceecology.org
http-01 challenge for opensourceecology.org
http-01 challenge for oswh.opensourceecology.org
http-01 challenge for staging.opensourceecology.org
http-01 challenge for wiki.opensourceecology.org
http-01 challenge for www.opensourceecology.org
Waiting for verification...
Cleaning up challenges
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/opensourceecology.org/fullchain.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/openbuildinginstitute.org.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not yet due for renewal
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The following certs are not due for renewal yet:
/etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem expires on 2018-09-11 (skipped)
Congratulations, all renewals succeeded. The following certs have been renewed:
/etc/letsencrypt/live/opensourceecology.org/fullchain.pem (success)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Redirecting to /bin/systemctl reload nginx.service
[root@hetzner2 site-packages]#
</pre>
# I went to www.opensourceecology.org in my browser and confirmed that the cert presented was issued today (2018-08-01)
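# the same check can be scripted; a hedged sketch using openssl to print a cert's validity window. A throwaway self-signed cert is generated here just to demonstrate the x509 inspection; against the live site the first command would instead be `echo | openssl s_client -connect www.opensourceecology.org:443 -servername www.opensourceecology.org 2>/dev/null`:

```shell
# Generate a short-lived self-signed cert (stand-in for the one nginx serves)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# Print its notBefore/notAfter dates; the same inspection would confirm
# the renewed Let's Encrypt cert's issue date
openssl x509 -noout -dates -in /tmp/demo.crt
```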
=Tue Jul 31, 2018=
# Marcin asked me if we should accept ether. I said we should (if his hardware wallet supports ether), but that we should not think of ether as a hedge against bitcoin failing.
# the glacier upload failed again for all 3x files. I threw this damn thing in a while loop; it will stop attempting to re-upload after the files are deleted by the retry script itself
<pre>
[root@hetzner2 ~]# while ls /var/tmp/deprecateHetzner1/sync/ | grep -E 'gpg$' ; do date; /root/bin/retryUploadToGlacier.sh; sleep 1; done
Tue Jul 31 23:37:45 UTC 2018
+ backupArchives='/var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_oseblog.tar.gpg /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osemain.tar.gpg'
...
</pre>
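# the loop's termination condition can be made explicit; a hedged sketch keyed off the files themselves, since the retry script deletes each archive once its upload succeeds. The directory and script names are the ones from this log; the function wrapper is mine:

```shell
# Keep retrying while any .gpg archive remains in the sync dir; the retry
# command is expected to delete each file whose upload succeeds.
retry_until_uploaded() {
    sync_dir="$1"; retry_cmd="$2"
    while ls "$sync_dir"/*.gpg >/dev/null 2>&1; do
        date
        "$retry_cmd" || true    # a failed attempt just leaves the file for the next pass
        sleep 1
    done
}

# On the box in this log:
#   retry_until_uploaded /var/tmp/deprecateHetzner1/sync /root/bin/retryUploadToGlacier.sh
```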
# did some research into phplist vs mailchimp. Sent Marcin an email asking why he chose phplist, if that research is final, if I should continue this research, or if I should just plunge forward in installing/configuring phplist.
## mailchimp is probably the most popular option. It's free up to 2,000 subscribers and 12,000 emails per month https://mailchimp.com/pricing/
# ...
# going down my TODO items, there's CODE https://wiki.opensourceecology.org/wiki/Google_docs_alternative
## at last look, we couldn't use them because there was no support for hyperlinks in impress/powerpoint. They fixed this 4 months after I submitted a bug report requesting the feature.
## I made another request for drawing arrows/lines/shapes, but there have been no status updates here https://bugs.documentfoundation.org/show_bug.cgi?id=113386
### I added a comment asking for an ETA
# ...
# I sent Marcin a follow-up email asking if he had a chance to test Janus yet. This is blocking my Jitsi poc.
# ...
# the only other task I can think of is mantis as a bug tracker. The question is: does it meet Marcin's requirements? Specifically, he wanted to have an "issue" track the progress of each machine in an automated fashion using templates. For example, we'd want an easy button to generate a "Development Template" such as this one https://docs.google.com/spreadsheets/d/1teVrReHnbS1xQFDJJhhmJX5_Lz2d5WkhJlq5yyeIfQw/edit#gid=1430144236
## it looks like this may already exist simply by using the "create clone" button on mantis https://www.mantisbt.org/bugs/view.php?id=8308
=Mon Jul 30, 2018=
# I checked on the status of the backups of hetzner1 being uploaded to glacier. Apparently the upload step failed.
<pre>
[root@hetzner2 bin]# ./uploadToGlacier.sh
+ backupDirs='/var/tmp/deprecateHetzner1/microft /var/tmp/deprecateHetzner1/oseforum /var/tmp/deprecateHetzner1/osemain /var/tmp/deprecateHetzner1/osecivi /var/tmp/deprecateHetzner1/oseblog'
+ syncDir=/var/tmp/deprecateHetzner1/sync/
...
+ gpg --symmetric --cipher-algo aes --batch --passphrase-file /root/backups/ose-backups-cron.key /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar
+ rm /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar
+ /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg
Traceback (most recent call last):
  File "/root/bin/glacier.py", line 736, in <module>
    main()
  File "/root/bin/glacier.py", line 732, in main
    App().main()
  File "/root/bin/glacier.py", line 718, in main
    self.args.func()
  File "/root/bin/glacier.py", line 500, in archive_upload
    file_obj=self.args.file, description=name)
  File "/usr/lib/python2.7/site-packages/boto/glacier/vault.py", line 177, in create_archive_from_file
    writer.write(data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 219, in write
    self.partitioner.write(data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 61, in write
    self._send_part()
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 75, in _send_part
    self.send_fn(part)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 222, in _upload_part
    self.uploader.upload_part(self.next_part_index, part_data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 129, in upload_part
    content_range, part_data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/layer1.py", line 1279, in upload_part
    response_headers=response_headers)
  File "/usr/lib/python2.7/site-packages/boto/glacier/layer1.py", line 119, in make_request
    raise UnexpectedHTTPResponseError(ok_responses, response)
boto.glacier.exceptions.UnexpectedHTTPResponseError: Expected 204, got (408, code=RequestTimeoutException, message=Request timed out.)
+ 1 -eq 0
</pre>
# that last line above was a check to see if the upload was successful. I forgot about this issue (god damn it glacier). Anyway, I had coded this in-place so that successful upload attempts would have their file deleted; failed files would still exist. A file listing shows that 3 of the 5 uploads failed
<pre>
[root@hetzner2 sync]# date
Mon Jul 30 19:43:19 UTC 2018
[root@hetzner2 sync]# pwd
/var/tmp/deprecateHetzner1/sync
[root@hetzner2 sync]# ls -lah
total 14G
drwxr-xr-x 2 root root 4.0K Jul 28 23:34 .
drwx------ 10 root root 4.0K Jul 28 22:26 ..
-rw-r--r-- 1 root root 810K Jul 28 22:37 hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2
-rw-r--r-- 1 root root 6.2G Jul 28 22:41 hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg
-rw-r--r-- 1 root root 4.0M Jul 28 23:31 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2
-rw-r--r-- 1 root root 4.4G Jul 28 23:34 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.tar.gpg
-rw-r--r-- 1 root root 100K Jul 28 23:26 hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2
-rw-r--r-- 1 root root 102K Jul 28 23:03 hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2
-rw-r--r-- 1 root root 187K Jul 28 23:13 hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2
-rw-r--r-- 1 root root 3.3G Jul 28 23:14 hetzner1_final_backup_before_hetzner1_deprecation_osemain.tar.gpg
[root@hetzner2 sync]#
</pre>
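The delete-on-success convention the script relies on can be sketched like this (a reconstruction for illustration, not the verbatim uploadToGlacier.sh; the `upload_pass` helper and its arguments are my own naming):

```shell
#!/bin/sh
# upload_pass DIR CMD...: run CMD on each encrypted archive in DIR, deleting
# the archive only when CMD exits zero. Whatever remains in DIR afterwards is
# exactly the set of failed uploads, so a retry pass can target the leftovers.
upload_pass() {
  sync_dir="$1"; shift
  for f in "$sync_dir"/*.tar.gpg; do
    [ -e "$f" ] || continue          # glob matched nothing: no archives left
    if "$@" "$f"; then
      rm "$f"                        # success: delete so later passes skip it
    else
      echo "upload failed; keeping $f for retry" >&2
    fi
  done
}

# the real invocation would look something like:
# upload_pass /var/tmp/deprecateHetzner1/sync /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020
```

Because failed archives persist on disk, a plain `ls` of the sync dir doubles as a status report, which is how the 3-of-5 failure above was diagnosed.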
# I copied the script from hetzner1:/home/marcin_ose/backups/retryUploadToGlacier.sh to hetzner2:/root/bin/retryUploadToGlacier.sh
# I changed the archive list to just those 3 failed files listed above.
# I executed 'retryUploadToGlacier.sh'. Hopefully it'll be finished by tomorrow.
# ...
# I successfully launched OSE Linux in an HVM Qube, but I left it idle for some time & when I returned to it, it was entirely black & unresponsive :(
# ...
# the retry script for the hetzner1 uploads to glacier finished; all 3 failed again. I re-ran it.
=Sat Jul 28, 2018=
Revision as of 17:38, 7 August 2018
My work log from the year 2018 Quarter 3. I intentionally made this verbose to make future admins' work easier when troubleshooting. The more keywords, error messages, etc. that are listed in this log, the more helpful it will be for the future OSE Sysadmin.
See Also
Wed Aug 01, 2018
- Marcin responded to me about phplist. OSE does prefer a free FLOSS solution. I think I understand, though, why most low-budget nonprofits choose mailchimp despite the cost. Their analytics & a/b tests provide the development/marketing teams essential tools to increase visibility, clicks, and donations that justify the costs. So I'll proceed by researching phplist and its FLOSS alternatives to see which provides the best analytics and a/b tests
- Marcin asked me to build d3d.opensourceecology.org as my highest priority task. I said I could do this by mid-next-week, and I asked Catarina to buy the latest version of the theme she wants & send me the zip, which she said she would achieve within a few days.
- first step: dns. I created d3d.opensourceecology.org in our cloudflare account to point to 138.201.84.243
- Cloudflare wouldn't let me log in without entering a 2fa token sent (insecurely) via email to our shared cloudflare@opensourceecology.org account
- gmail wouldn't let me log in to the above email address for the same reason. They wanted me to enter a phone number for a 2fa token to be sent (insecurely) via sms
- this shit is fucking annoying. I went to the g suite as superuser to see if I could permanently disable these god damn bullshit 2fa requests from gmail. I found that Users -> Cloud Flare -> Security had a button to "Turn off for 10 mins" next to the "Login Challenge" section with the description "Turn off identity questions for 10 minutes after a suspicious attempt to sign in. Disabling login challenge will make the account less secure"
- we have a fucking 100-character, unique password securely stored in a keepass. I understand what google is doing, but that's for people who reuse shitty passwords for many sites. We should be able to disable this fucking bug indefinitely on a per-account basis >:O
- finally, after the temp disable, I was able to log in to the gmail account, get the token for Cloudflare, log in to Cloudflare, and then add the dns record for 'd3d.opensourceecology.org'
- I generated a 70-character password for a new d3d_user mysql user and added it to keepass
- I went to create the cert, but certbot failed!
<pre>
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 9, in <module>
    load_entry_point('certbot==0.19.0', 'console_scripts', 'certbot')()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 479, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2703, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2321, in load
    return self.resolve()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2327, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/lib/python2.7/site-packages/certbot/main.py", line 19, in <module>
    from certbot import client
  File "/usr/lib/python2.7/site-packages/certbot/client.py", line 11, in <module>
    from acme import client as acme_client
  File "/usr/lib/python2.7/site-packages/acme/client.py", line 31, in <module>
    requests.packages.urllib3.contrib.pyopenssl.inject_into_urllib3()  # type: ignore
  File "/usr/lib/python2.7/site-packages/urllib3/contrib/pyopenssl.py", line 118, in inject_into_urllib3
    _validate_dependencies_met()
  File "/usr/lib/python2.7/site-packages/urllib3/contrib/pyopenssl.py", line 153, in _validate_dependencies_met
    raise ImportError("'pyOpenSSL' module missing required functionality. "
ImportError: 'pyOpenSSL' module missing required functionality. Try upgrading to v0.14 or newer.
[root@hetzner2 htdocs]#
</pre>
- I (foolishly) was playing with pip earlier to get b2 working. I think this is why. I fucking hate pip, and shouldn't have touched it on a production box. https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q3#Tue_Jul_17.2C_2018
- in an effort to get b2 working, I installed pip from the yum repo, had pip update itself, installed setuptools>=20.2 from pip, and finally installed this long list via pip: "six, python-dateutil, backports.functools-lru-cache, arrow, funcsigs, logfury, certifi, chardet, urllib3, idna, requests, tqdm, futures, b2"
- even ^ that failed (fucking pip) so I resorted to installing from git
- I removed the pip-installed certbot and related depends, then installed with yum
<pre>
pip uninstall certbot cerbot-apache certbot-nginx requests six urllib3 acme
yum install python-requests python-six python-urllib3
</pre>
- this got me a new error with acme
<pre>
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3098, in <module>
    @_call_aside
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3082, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3111, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 573, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 891, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 777, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'acme>0.24.0' distribution was not found and is required by certbot
[root@hetzner2 htdocs]#
</pre>
- I'm just going to fucking uninstall pip and all its packages
<pre>
[root@hetzner2 htdocs]# packages=$(pip list 2>&1 | tail -n+3 | head -n-2 | awk '{print $1}')
[root@hetzner2 htdocs]# echo $packages
arrow b2 backports.functools-lru-cache backports.ssl-match-hostname boto certbot certifi cffi chardet ConfigArgParse configobj cryptography decorator duplicity enum34 fasteners funcsigs future futures GnuPGInterface google-api-python-client httplib2 idna iniparse ipaddress IPy iso8601 javapackages josepy keyring lockfile logfury lxml mock ndg-httpsclient oauth2client paramiko parsedatetime perf pexpect pip ply policycoreutils-default-encoding psutil pyasn1 pyasn1-modules pycparser pycurl PyDrive pygobject pygpgme pyliblzma pyOpenSSL pyparsing pyRFC3339 python-augeas python-dateutil python-gflags python-linux-procfs python2-pythondialog pytz pyudev pyxattr PyYAML requests requests-toolbelt rsa schedutils seobject sepolicy setuptools slip slip.dbus SQLAlchemy tqdm uritemplate urlgrabber yum-metadata-parser zope.component zope.event zope.interface
[root@hetzner2 htdocs]#
[root@hetzner2 htdocs]# for p in $packages; do pip uninstall $p; done
[root@hetzner2 htdocs]# yum remove python-pip
...
Removed:
  python2-pip.noarch 0:8.1.2-6.el7
Complete!
[root@hetzner2 htdocs]# yum install certbot
...
Updated:
  certbot.noarch 0:0.26.1-1.el7
Dependency Updated:
  python2-certbot.noarch 0:0.26.1-1.el7
Complete!
[root@hetzner2 htdocs]#
</pre>
- I still have errors, but a new one. Fucking pip!
<pre>
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3098, in <module>
    @_call_aside
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3082, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3111, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 573, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 891, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 777, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'parsedatetime>=1.3' distribution was not found and is required by certbot
[root@hetzner2 htdocs]#
</pre>
- I slowly began to install these packages back from yum as needed, but I hit an issue with urllib3. A find shows another version installed by the aws cli
<pre>
[root@hetzner2 htdocs]# find / -name urllib3
/root/.local/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3
/root/.local/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3
[root@hetzner2 htdocs]# mv /root/.local /root/.local.bak
[root@hetzner2 htdocs]#
</pre>
- after install & move
<pre>
ImportError: No module named 'requests.packages.urllib3'
[root@hetzner2 htdocs]# find / -name urllib3
/root/.local.bak/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3
/root/.local.bak/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3
/usr/lib/python2.7/site-packages/urllib3
[root@hetzner2 htdocs]#
</pre>
- here's all the packages; not sure why it can't fucking find urllib3
<pre>
[root@hetzner2 htdocs]# ls /usr/lib/python2.7/site-packages/
acme GnuPGInterface.pyc python_augeas-0.5.0-py2.7.egg-info acme-0.25.1-py2.7.egg-info GnuPGInterface.pyo python_dateutil-2.7.3.dist-info
ANSI.py html python_gflags-2.0-py2.7.egg-info ANSI.pyc http python_linux_procfs-0.4.9-py2.7.egg-info ANSI.pyo idna pytz
augeas.py idna-2.4-py2.7.egg-info pytz-2016.10-py2.7.egg-info augeas.pyc iniparse pyudev augeas.pyo iniparse-0.4-py2.7.egg-info pyudev-0.15-py2.7.egg-info
backports ipaddress-1.0.16-py2.7.egg-info queue backports.functools_lru_cache-1.2.1-py2.7.egg-info ipaddress.py repoze
backports.functools_lru_cache-1.2.1-py2.7-nspkg.pth ipaddress.pyc repoze.lru-0.4-py2.7.egg-info builtins ipaddress.pyo repoze.lru-0.4-py2.7-nspkg.pth
cached_property-1.3.0-py2.7.egg-info IPy-0.75-py2.7.egg-info reprlib cached_property.py IPy.py requests cached_property.pyc IPy.pyc requests-2.19.1-py2.7.egg
cached_property.pyo IPy.pyo requests-2.6.0-py2.7.egg-info certbot josepy requests_toolbelt certbot-0.26.1-py2.7.egg-info josepy-1.1.0-py2.7.egg-info requests_toolbelt-0.8.0-py2.7.egg-info
chardet jsonschema rpmUtils chardet-2.2.1-py2.7.egg-info jsonschema-2.5.1-py2.7.egg-info rsa ConfigArgParse-0.11.0-py2.7.egg-info libfuturize rsa-3.4.1-py2.7.egg-info
configargparse.py libpasteurize screen.py configargparse.pyc List-1.3.0-py2.7.egg screen.pyc configargparse.pyo lockfile screen.pyo
configobj-4.7.2-py2.7.egg-info lockfile-0.9.1-py2.7.egg-info setuptools configobj.py _markupbase setuptools-40.0.0.dist-info
configobj.pyc mock-1.0.1-py2.7.egg-info six-1.9.0-py2.7.egg-info configobj.pyo mock.py six.py copyreg mock.pyc six.pyc dateutil mock.pyo six.pyo
dialog.py ndg slip dialog.pyc ndg_httpsclient-0.3.2-py2.7.egg-info slip-0.4.0-py2.7.egg-info dialog.pyo ndg_httpsclient-0.3.2-py2.7-nspkg.pth slip.dbus-0.4.0-py2.7.egg-info
docopt-0.6.2-py2.7.egg-info parsedatetime socketserver docopt.py parsedatetime-2.4-py2.7.egg-info texttable-1.3.1-py2.7.egg-info docopt.pyc past texttable.py docopt.pyo
pexpect-2.3-py2.7.egg-info texttable.pyc _dummy_thread pexpect.py texttable.pyo easy-install.pth pexpect.pyc _thread easy_install.py pexpect.pyo tkinter easy_install.pyc pkg_resources tqdm-4.23.4-py2.7.egg
enum ply tuned enum34-1.0.4-py2.7.egg-info ply-3.4-py2.7.egg-info uritemplate fdpexpect.py procfs uritemplate-3.0.0-py2.7.egg-info fdpexpect.pyc pxssh.py urlgrabber fdpexpect.pyo pxssh.pyc urlgrabber-3.10-py2.7.egg-info
firewall pxssh.pyo urllib3 FSM.py pyasn1 urllib3-1.10.2-py2.7.egg-info FSM.pyc pyasn1-0.1.9-py2.7.egg-info validate.py FSM.pyo pyasn1_modules validate.pyc
future pyasn1_modules-0.0.8-py2.7.egg-info validate.pyo future-0.16.0-py2.7.egg-info pycparser winreg gflags.py pycparser-2.14-py2.7.egg-info xmlrpc gflags.pyc pyparsing-1.5.6-py2.7.egg-info yum gflags.pyo pyparsing.py zope
gflags_validators.py pyparsing.pyc zope.component-4.1.0-py2.7.egg-info gflags_validators.pyc pyparsing.pyo zope.component-4.1.0-py2.7-nspkg.pth gflags_validators.pyo pyrfc3339 zope.event-4.0.3-py2.7.egg-info
GnuPGInterface-0.3.2-py2.7.egg-info pyRFC3339-1.0-py2.7.egg-info zope.event-4.0.3-py2.7-nspkg.pth GnuPGInterface.py python2_pythondialog-3.3.0-py2.7.egg-info
[root@hetzner2 htdocs]#
</pre>
- I uninstalled a lot of packages, then did the `ls` again. I've seen some correlation between urllib3 & requests, so maybe it's this lingering requests dir...
<pre>
[root@hetzner2 htdocs]# yum remove python-parsedatetime python-mock python-josepy python-cryptography python-configargparse python-future python-six python-idna python-requests python-chardet python-requests-toolbelt python-urllib3 pyOpenSSL
...
[root@hetzner2 htdocs]# ls /usr/lib/python2.7/site-packages/
ANSI.py GnuPGInterface-0.3.2-py2.7.egg-info python_augeas-0.5.0-py2.7.egg-info ANSI.pyc GnuPGInterface.py python_dateutil-2.7.3.dist-info ANSI.pyo GnuPGInterface.pyc python_gflags-2.0-py2.7.egg-info
augeas.py GnuPGInterface.pyo python_linux_procfs-0.4.9-py2.7.egg-info augeas.pyc iniparse pytz augeas.pyo iniparse-0.4-py2.7.egg-info pytz-2016.10-py2.7.egg-info
backports ipaddress-1.0.16-py2.7.egg-info pyudev backports.functools_lru_cache-1.2.1-py2.7.egg-info ipaddress.py pyudev-0.15-py2.7.egg-info backports.functools_lru_cache-1.2.1-py2.7-nspkg.pth ipaddress.pyc repoze
cached_property-1.3.0-py2.7.egg-info ipaddress.pyo repoze.lru-0.4-py2.7.egg-info cached_property.py IPy-0.75-py2.7.egg-info repoze.lru-0.4-py2.7-nspkg.pth cached_property.pyc IPy.py requests-2.19.1-py2.7.egg cached_property.pyo IPy.pyc rpmUtils
configobj-4.7.2-py2.7.egg-info IPy.pyo rsa configobj.py jsonschema rsa-3.4.1-py2.7.egg-info configobj.pyc jsonschema-2.5.1-py2.7.egg-info screen.py configobj.pyo List-1.3.0-py2.7.egg screen.pyc
dateutil lockfile screen.pyo dialog.py lockfile-0.9.1-py2.7.egg-info setuptools dialog.pyc pexpect-2.3-py2.7.egg-info setuptools-40.0.0.dist-info dialog.pyo pexpect.py slip
docopt-0.6.2-py2.7.egg-info pexpect.pyc slip-0.4.0-py2.7.egg-info docopt.py pexpect.pyo slip.dbus-0.4.0-py2.7.egg-info docopt.pyc pkg_resources texttable-1.3.1-py2.7.egg-info docopt.pyo ply texttable.py
easy-install.pth ply-3.4-py2.7.egg-info texttable.pyc easy_install.py procfs texttable.pyo easy_install.pyc pxssh.py tqdm-4.23.4-py2.7.egg enum pxssh.pyc tuned enum34-1.0.4-py2.7.egg-info pxssh.pyo uritemplate
fdpexpect.py pyasn1 uritemplate-3.0.0-py2.7.egg-info fdpexpect.pyc pyasn1-0.1.9-py2.7.egg-info urlgrabber fdpexpect.pyo pyasn1_modules urlgrabber-3.10-py2.7.egg-info firewall pyasn1_modules-0.0.8-py2.7.egg-info validate.py
FSM.py pycparser validate.pyc FSM.pyc pycparser-2.14-py2.7.egg-info validate.pyo FSM.pyo pyparsing-1.5.6-py2.7.egg-info yum gflags.py pyparsing.py zope gflags.pyc pyparsing.pyc zope.component-4.1.0-py2.7.egg-info
gflags.pyo pyparsing.pyo zope.component-4.1.0-py2.7-nspkg.pth gflags_validators.py pyrfc3339 zope.event-4.0.3-py2.7.egg-info gflags_validators.pyc pyRFC3339-1.0-py2.7.egg-info zope.event-4.0.3-py2.7-nspkg.pth gflags_validators.pyo python2_pythondialog-3.3.0-py2.7.egg-info
[root@hetzner2 htdocs]# find / -name requests
/root/.local.bak/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests
/root/.local.bak/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests
/usr/lib/python2.7/site-packages/requests-2.19.1-py2.7.egg/requests
[root@hetzner2 htdocs]#
</pre>
- fucking hell pip. I never should have played with the devil here. Pip is the god damn devil. Let's try to import it ourselves.
<pre>
[maltfield@hetzner2 ~]$ date
Wed Aug 1 21:03:41 UTC 2018
[maltfield@hetzner2 ~]$ pwd
/home/maltfield
[maltfield@hetzner2 ~]$ which python
/usr/bin/python
[maltfield@hetzner2 ~]$ python
Python 2.7.5 (default, Aug 4 2017, 00:39:18)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import math;
>>> import requests;
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/requests/__init__.py", line 58, in <module>
    from . import utils
  File "/usr/lib/python2.7/site-packages/requests/utils.py", line 32, in <module>
    from .exceptions import InvalidURL
  File "/usr/lib/python2.7/site-packages/requests/exceptions.py", line 10, in <module>
    from .packages.urllib3.exceptions import HTTPError as BaseHTTPError
  File "/usr/lib/python2.7/site-packages/requests/packages/__init__.py", line 95, in load_module
    raise ImportError("No module named '%s'" % (name,))
ImportError: No module named 'requests.packages.urllib3'
</pre>
- the actual error is in /usr/lib/python2.7/site-packages/requests/exceptions.py on line 10
<pre>
from .packages.urllib3.exceptions import HTTPError as BaseHTTPError
</pre>
- using `rpm -V`, I got a list of corrupted python packages
<pre>
[root@hetzner2 ~]# for package in $(rpm -qa | grep -i python); do if `rpm -V $package` ; then echo $package; fi; done
python-javapackages-3.4.1-11.el7.noarch
python2-iso8601-0.1.11-7.el7.noarch
python-httplib2-0.9.2-1.el7.noarch
python2-keyring-5.0-3.el7.noarch
python-backports-1.0-8.el7.x86_64
python-decorator-3.4.0-3.el7.noarch
python-lxml-3.2.1-4.el7.x86_64
python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch
[root@hetzner2 ~]#
</pre>
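In hindsight, there is a less fragile way to write that check: `rpm -V` exits non-zero when a package fails verification, so the exit code can be tested directly rather than executing rpm's output through backticks. The helper below is my own sketch, not a command from this session:

```shell
#!/bin/sh
# list_failed_verification PATTERN: print each installed package matching
# PATTERN whose installed files no longer verify against the rpm database
# (rpm -V exits non-zero when it finds discrepancies)
list_failed_verification() {
  for package in $(rpm -qa | grep -i "$1"); do
    rpm -V "$package" >/dev/null 2>&1 || echo "$package"
  done
}

# usage: list_failed_verification python
```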
- I tried to remove & re-install these packages
<pre>
[root@hetzner2 ~]# yum remove python-javapckages python2-iso8601 python-httplib2 python2-keyring python-backports python-decorator python-lxml python-backports-ssl_match_hostname
...
[root@hetzner2 ~]# yum install python-javapckages python2-iso8601 python-httplib2 python2-keyring python-backports python-decorator python-lxml python-backports-ssl_match_hostname certbot jitsi java-1.8.0-openjdk tuned
</pre>
- finally, it works again! Lesson learned: never, ever use pip.
<pre>
[root@hetzner2 ~]# certbot
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
[root@hetzner2 ~]# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: opensourceecology.org
    Domains: opensourceecology.org awstats.opensourceecology.org fef.opensourceecology.org forum.opensourceecology.org munin.opensourceecology.org oswh.opensourceecology.org staging.opensourceecology.org wiki.opensourceecology.org www.opensourceecology.org
    Expiry Date: 2018-08-22 21:36:52+00:00 (VALID: 21 days)
    Certificate Path: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/opensourceecology.org/privkey.pem
  Certificate Name: openbuildinginstitute.org
    Domains: www.openbuildinginstitute.org awstats.openbuildinginstitute.org openbuildinginstitute.org seedhome.openbuildinginstitute.org
    Expiry Date: 2018-09-11 03:20:08+00:00 (VALID: 40 days)
    Certificate Path: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/openbuildinginstitute.org/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[root@hetzner2 ~]#
</pre>
- undoing my changes during testing, I'll move the /root/.local dir back
<pre>
[root@hetzner2 ~]# aws
-bash: aws: command not found
[root@hetzner2 ~]# mv .local.bak .local
[root@hetzner2 ~]# aws
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: too few arguments
[root@hetzner2 ~]#
</pre>
- all packages look good, and `certbot` still works!
<pre>
[root@hetzner2 ~]# for package in $(rpm -qa | grep -i python); do if `rpm -V $package` ; then echo $package; fi; done
[root@hetzner2 ~]# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: opensourceecology.org
    Domains: opensourceecology.org awstats.opensourceecology.org fef.opensourceecology.org forum.opensourceecology.org munin.opensourceecology.org oswh.opensourceecology.org staging.opensourceecology.org wiki.opensourceecology.org www.opensourceecology.org
    Expiry Date: 2018-08-22 21:36:52+00:00 (VALID: 21 days)
    Certificate Path: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/opensourceecology.org/privkey.pem
  Certificate Name: openbuildinginstitute.org
    Domains: www.openbuildinginstitute.org awstats.openbuildinginstitute.org openbuildinginstitute.org seedhome.openbuildinginstitute.org
    Expiry Date: 2018-09-11 03:20:08+00:00 (VALID: 40 days)
    Certificate Path: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/openbuildinginstitute.org/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[root@hetzner2 ~]#
</pre>
- b2 is no longer accessible
<pre>
[root@hetzner2 ~]# su - b2user
Last login: Sat Jul 28 19:48:19 UTC 2018 on pts/105
[b2user@hetzner2 ~]$ b2
-bash: b2: command not found
[b2user@hetzner2 ~]$
</pre>
- I went to install b2 for the b2user, but it complains about setuptools and tells me to use pip to get setuptools >=20.2. Fuck that!
<pre>
[b2user@hetzner2 ~]$ mkdir sandbox
[b2user@hetzner2 ~]$ cd sandbox/
[b2user@hetzner2 sandbox]$ git clone https://github.com/Backblaze/B2_Command_Line_Tool.git
Cloning into 'B2_Command_Line_Tool'...
remote: Counting objects: 6010, done.
remote: Total 6010 (delta 0), reused 0 (delta 0), pack-reused 6009
Receiving objects: 100% (6010/6010), 1.49 MiB | 1.65 MiB/s, done.
Resolving deltas: 100% (4342/4342), done.
[b2user@hetzner2 sandbox]$ cd B2_Command_Line_Tool/
[b2user@hetzner2 B2_Command_Line_Tool]$ python setup.py install
setuptools 20.2 or later is required. To fix, try running:
    pip install "setuptools>=20.2"
[b2user@hetzner2 B2_Command_Line_Tool]$
</pre>
- setuptools from yum is 0.9.8-7
<pre>
[root@hetzner2 site-packages]# rpm -qa | grep setuptools
python-setuptools-0.9.8-7.el7.noarch
[root@hetzner2 site-packages]#
</pre>
- I'll have to fix b2 for the b2user. Perhaps I can figure out how to set up a separate env. I'll also have to finish setting up the d3d site, but at least certbot is fixed. That could have been nasty if our cert had expired with `certbot` broken; all the sites would have become inaccessible after 90 days! Fuck pip. Fuck pip. Fuck pip. Stick to the distro package manager. Always stick to the distro package manager.
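One way to give the b2user a separate env without letting pip anywhere near the system site-packages would be a per-user virtualenv (a sketch under the assumption that the virtualenv tool is installed from yum; the `make_b2_env` name and paths are illustrative, not from this session):

```shell
#!/bin/sh
# make_b2_env DIR: build an isolated virtualenv at DIR and install b2 into it.
# pip invoked from inside the env only writes under DIR, never into
# /usr/lib/python2.7/site-packages.
make_b2_env() {
  env_dir="$1"
  virtualenv "$env_dir"                                    || return 1
  "$env_dir/bin/pip" install --upgrade 'setuptools>=20.2'  || return 1  # satisfies b2's setup.py check
  "$env_dir/bin/pip" install b2                            || return 1
}

# usage: make_b2_env /home/b2user/virtualenv/b2
# then run b2 as: /home/b2user/virtualenv/b2/bin/b2
```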
- I went ahead and ran the letsencrypt renew script--which had been failing before. It renewed the ose cert.
<pre>
[root@hetzner2 site-packages]# /root/bin/letsencrypt/renew.sh
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/opensourceecology.org.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert is due for renewal, auto-renewing...
Plugins selected: Authenticator webroot, Installer None
Starting new HTTPS connection (1): acme-v02.api.letsencrypt.org
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for awstats.opensourceecology.org
http-01 challenge for fef.opensourceecology.org
http-01 challenge for forum.opensourceecology.org
http-01 challenge for munin.opensourceecology.org
http-01 challenge for opensourceecology.org
http-01 challenge for oswh.opensourceecology.org
http-01 challenge for staging.opensourceecology.org
http-01 challenge for wiki.opensourceecology.org
http-01 challenge for www.opensourceecology.org
Waiting for verification...
Cleaning up challenges

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
new certificate deployed without reload, fullchain is /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/openbuildinginstitute.org.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not yet due for renewal
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The following certs are not due for renewal yet:
  /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem expires on 2018-09-11 (skipped)
Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/opensourceecology.org/fullchain.pem (success)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Redirecting to /bin/systemctl reload nginx.service
[root@hetzner2 site-packages]#
</pre>
- I went to www.opensourceecology.org in my browser and confirmed that the cert listed was issued from today's date (2018-08-01)
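The served certificate's validity window can also be confirmed from a shell, without a browser. These are standard openssl invocations, not commands run in this session; `cert_dates` is my own helper name:

```shell
#!/bin/sh
# cert_dates HOST: print the notBefore/notAfter dates of the TLS certificate
# that HOST serves on :443 (requires network access to HOST)
cert_dates() {
  host="$1"
  echo | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null \
    | openssl x509 -noout -dates
}

# usage: cert_dates www.opensourceecology.org
```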
Tue Jul 31, 2018
- Marcin asked me if we should accept ether. I said we should (if his hardware wallet supports ether), but that we should not think of ether as a hedge against bitcoin failing.
- the glacier upload failed again for all 3x files. I threw this damn thing in a while loop; it will stop attempting to re-upload after the files are deleted by the same retry script
<pre>
[root@hetzner2 ~]# while grep -E 'gpg$'`` ; do date; /root/bin/retryUploadToGlacier.sh; sleep 1; done
Tue Jul 31 23:37:45 UTC 2018
+ backupArchives='/var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_oseblog.tar.gpg /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osemain.tar.gpg'
...
</pre>
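The loop as pasted lost a few characters in the wiki markup. The intent described above (keep looping while any .gpg archive remains, since the retry script deletes each archive only after a successful upload) could be written like this; a reconstruction, not the verbatim command:

```shell
#!/bin/sh
# keep retrying while any encrypted archive remains in the sync dir; the
# retry script removes each .tar.gpg on successful upload, so an empty
# directory terminates the loop
while ls /var/tmp/deprecateHetzner1/sync/ 2>/dev/null | grep -qE 'gpg$'; do
  date
  /root/bin/retryUploadToGlacier.sh
  sleep 1
done
```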
- did some research into phplist vs mailchimp. Sent Marcin an email asking why he chose phplist, if that research is final, if I should continue this research, or if I should just plunge forward in installing/configuring phplist.
- mailchimp is probably the most popular option. It's free up to 2,000 subscribers and 12,000 emails per month https://mailchimp.com/pricing/
- ...
- going down my TODO items, there's CODE https://wiki.opensourceecology.org/wiki/Google_docs_alternative
- at last look, we couldn't use them because there was no support for hyperlinks in impress/powerpoint. They fixed this 4 months after I submitted a bug report requesting the feature.
- I made another request for drawing arrows/lines/shapes, but there's been no status updates here https://bugs.documentfoundation.org/show_bug.cgi?id=113386
- I added a comment asking for an ETA
- ...
- I sent Marcin a follow-up email asking if he had a chance to test Janus yet. This is blocking my Jitsi poc.
- ...
- the only other task I can think of is mantis as a bug tracker. The question is: does itr meet Marcin's requirements. Specifically, he wanted to have an "issue" track the progress of each machine in an automated fashion using templates. For example, we'd want an easy button to generate a "Development Template" such as this one https://docs.google.com/spreadsheets/d/1teVrReHnbS1xQFDJJhhmJX5_Lz2d5WkhJlq5yyeIfQw/edit#gid=1430144236
- it looks like this may already exist simply by using the "create clone" button on mantis https://www.mantisbt.org/bugs/view.php?id=8308
Mon Jul 30, 2018
- I checked on the status of the backups of hetzner1 being uploaded to glacier. Apparently the upload step failed.
[root@hetzner2 bin]# ./uploadToGlacier.sh
+ backupDirs='/var/tmp/deprecateHetzner1/microft /var/tmp/deprecateHetzner1/oseforum /var/tmp/deprecateHetzner1/osemain /var/tmp/deprecateHetzner1/osecivi /var/tmp/deprecateHetzner1/oseblog'
+ syncDir=/var/tmp/deprecateHetzner1/sync/
...
+ gpg --symmetric --cipher-algo aes --batch --passphrase-file /root/backups/ose-backups-cron.key /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar
+ rm /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar
+ /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg
Traceback (most recent call last):
  File "/root/bin/glacier.py", line 736, in <module>
    main()
  File "/root/bin/glacier.py", line 732, in main
    App().main()
  File "/root/bin/glacier.py", line 718, in main
    self.args.func()
  File "/root/bin/glacier.py", line 500, in archive_upload
    file_obj=self.args.file, description=name)
  File "/usr/lib/python2.7/site-packages/boto/glacier/vault.py", line 177, in create_archive_from_file
    writer.write(data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 219, in write
    self.partitioner.write(data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 61, in write
    self._send_part()
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 75, in _send_part
    self.send_fn(part)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 222, in _upload_part
    self.uploader.upload_part(self.next_part_index, part_data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 129, in upload_part
    content_range, part_data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/layer1.py", line 1279, in upload_part
    response_headers=response_headers)
  File "/usr/lib/python2.7/site-packages/boto/glacier/layer1.py", line 119, in make_request
    raise UnexpectedHTTPResponseError(ok_responses, response)
boto.glacier.exceptions.UnexpectedHTTPResponseError: Expected 204, got (408, code=RequestTimeoutException, message=Request timed out.)
+ 1 -eq 0
- that last line above was a check of whether the upload was successful. I forgot about this issue (god damn it, glacier). Anyway, I had coded this script so that successfully uploaded archives would have their files deleted; failed files would be left in place. A file listing shows that 3 of the 5 uploads failed
[root@hetzner2 sync]# date
Mon Jul 30 19:43:19 UTC 2018
[root@hetzner2 sync]# pwd
/var/tmp/deprecateHetzner1/sync
[root@hetzner2 sync]# ls -lah
total 14G
drwxr-xr-x  2 root root 4.0K Jul 28 23:34 .
drwx------ 10 root root 4.0K Jul 28 22:26 ..
-rw-r--r--  1 root root 810K Jul 28 22:37 hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2
-rw-r--r--  1 root root 6.2G Jul 28 22:41 hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg
-rw-r--r--  1 root root 4.0M Jul 28 23:31 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2
-rw-r--r--  1 root root 4.4G Jul 28 23:34 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.tar.gpg
-rw-r--r--  1 root root 100K Jul 28 23:26 hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2
-rw-r--r--  1 root root 102K Jul 28 23:03 hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2
-rw-r--r--  1 root root 187K Jul 28 23:13 hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2
-rw-r--r--  1 root root 3.3G Jul 28 23:14 hetzner1_final_backup_before_hetzner1_deprecation_osemain.tar.gpg
[root@hetzner2 sync]#
- I copied the script from hetzner1:/home/marcin_ose/backups/retryUploadToGlacier.sh to hetzner2:/root/bin/retryUploadToGlacier.sh
- I changed the archive list to just the 3 failed archives listed above.
- I executed 'retryUploadToGlacier.sh'. Hopefully it'll be finished by tomorrow.
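In outline, the retry pass works like this. The sketch below is an illustrative stand-in, not the actual contents of retryUploadToGlacier.sh: the upload() function is a stub for the real glacier.py invocation, and the paths are throwaway temp paths.

```shell
#!/bin/bash
# Illustrative sketch of the retry logic: failed uploads are intentionally
# left behind in the staging dir, so a retry pass just loops over whatever
# .tar.gpg files remain and attempts each upload again, deleting on success.
syncDir=$(mktemp -d)
touch "$syncDir/a.tar.gpg" "$syncDir/b.tar.gpg"

# stand-in for: /root/bin/glacier.py --region us-west-2 archive upload ...
upload() { true; }

for archive in "$syncDir"/*.tar.gpg; do
    if upload "$archive"; then
        rm "$archive"    # success: clear it from the staging dir
    else
        echo "upload failed; keeping $archive for the next retry" >&2
    fi
done
ls -A "$syncDir"    # prints nothing once every retry succeeded
```

The delete-on-success convention means the staging dir itself is the retry queue: no separate bookkeeping is needed.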
- ...
- I successfully launched OSE Linux in an HVM Qube, but I left it idle for some time & when I returned to it, it was entirely black & unresponsive :(
- ...
- the retry script for the hetzner1 uploads to glacier finished; all 3 failed again. I re-ran it.
=Sat Jul 28, 2018=
- Updated the existing backup.sh script to exclude '/home/b2user/sync', which is now defined as $b2StagingDir in '/root/backups/backup.settings'
- I began to wonder how we can throttle these uploads to b2. Previously, we used rsync's 'bwlimit' argument to limit our backup process's upload to dreamhost to 3 MB/s. There doesn't appear to be an equivalent option for b2. Unfortunately, googling for info about how to throttle backblaze uploads mostly turns up clients claiming that backblaze was throttling their uploads on _their_ end (mostly for the unlimited home users' accounts; backblaze denies this). Ironic, but we _do_ want to throttle, albeit on our end.
- looks like this was requested over a year ago (2016-12), but is unassigned. The stated workaround is to use trickle; it looks like we'd want something like `trickle -s -u 3000 b2 upload-file --threads 1 ose-server-backups <sourceFilePath> <destFilePath>` https://github.com/Backblaze/B2_Command_Line_Tool/issues/310
- I installed trickle
[root@hetzner2 backblaze]# which trickle
/usr/bin/which: no trickle in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
[root@hetzner2 backblaze]# yum install trickle
Loaded plugins: fastestmirror, replace
Loading mirror speeds from cached hostfile
 * base: mirror.wiuwiu.de
 * epel: mirror.wiuwiu.de
 * extras: mirror.wiuwiu.de
 * updates: mirror.wiuwiu.de
 * webtatic: uk.repo.webtatic.com
Resolving Dependencies
--> Running transaction check
---> Package trickle.x86_64 0:1.07-19.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================================================================================
 Package                                  Arch                                    Version                                        Repository                             Size
========================================================================================================================================================================================
Installing:
 trickle                                  x86_64                                  1.07-19.el7                                    epel                                   48 k

Transaction Summary
========================================================================================================================================================================================
Install  1 Package

Total download size: 48 k
Installed size: 103 k
Is this ok [y/d/N]: y
Downloading packages:
trickle-1.07-19.el7.x86_64.rpm                                                                                                                       |  48 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : trickle-1.07-19.el7.x86_64                                                                                                                               1/1
  Verifying  : trickle-1.07-19.el7.x86_64                                                                                                                               1/1

Installed:
  trickle.x86_64 0:1.07-19.el7

Complete!
[root@hetzner2 backblaze]#
[root@hetzner2 backblaze]# which trickle
/bin/trickle
[root@hetzner2 backblaze]#
- made many changes to backup2.sh, and here's the output of the first completed test run
[root@hetzner2 backups]# ./backup2.sh
================================================================================
Beginning Backup Run on 20180728_202957
/bin/tar: Removing leading `/' from member names

real	0m0.902s
user	0m0.911s
sys	0m0.057s
/bin/tar: Removing leading `/' from member names
/root/backups/sync/hetzner2_20180728_202957/etc/
/root/backups/sync/hetzner2_20180728_202957/etc/etc.20180728_202957.tar.gz

real	0m0.010s
user	0m0.000s
sys	0m0.010s

real	0m0.494s
user	0m0.479s
sys	0m0.012s
/home/b2user/sync/hetzner2_20180728_202957.tar.gpg: 100%|| 11.2M/11.2M [00:05<00:00, 1.89MB/s]
URL by file name: https://f001.backblazeb2.com/file/ose-server-backups/hetzner2_20180728_202957.tar.gpg
URL by fileId: https://f001.backblazeb2.com/b2api/v1/b2_download_file_by_id?fileId=4_z5605817c251dadb96e4d0118_f114b5017aac233bd_d20180728_m203000_c001_v0001106_t0015
{
  "action": "upload",
  "fileId": "4_z5605817c251dadb96e4d0118_f114b5017aac233bd_d20180728_m203000_c001_v0001106_t0015",
  "fileName": "hetzner2_20180728_202957.tar.gpg",
  "size": 11226387,
  "uploadTimestamp": 1532809800000
}
[root@hetzner2 backups]#
- next I logged into the backblaze wui, and I browsed the files in the bucket. Sure enough, I see the 11.2M file named 'hetzner2_20180728_202957.tar.gpg'
- this time I wanted to test restoring from the wui, so I downloaded this file (hetzner2_20180728_202957.tar.gpg) straight from the browser, then decrypted it on my laptop
root@ose:/home/user/tmp/backblaze/restore.2018-07# date Sat Jul 28 16:49:26 EDT 2018 root@ose:/home/user/tmp/backblaze/restore.2018-07# pwd /home/user/tmp/backblaze/restore.2018-07 root@ose:/home/user/tmp/backblaze/restore.2018-07# ls -lah total 11M drwxrwxr-x 2 root root 4.0K Jul 28 16:49 . drwxrwxr-x 3 root root 4.0K Jul 28 16:49 .. -rw-r--r-- 1 user user 11M Jul 28 16:47 hetzner2_20180728_202957.tar.gpg root@ose:/home/user/tmp/backblaze/restore.2018-07# root@ose:/home/user/tmp/backblaze/restore.2018-07# gpg --output hetzner2_20180728_202957.tar --batch --passphrase-file /home/user/keepass/ose-backups-cron.key --decrypt hetzner2_20180728_202957.tar.gpg gpg: AES256 encrypted data gpg: encrypted with 1 passphrase root@ose:/home/user/tmp/backblaze/restore.2018-07# ls -lah total 22M drwxrwxr-x 2 root root 4.0K Jul 28 16:50 . drwxrwxr-x 3 root root 4.0K Jul 28 16:49 .. -rw-rw-r-- 1 root root 11M Jul 28 16:50 hetzner2_20180728_202957.tar -rw-r--r-- 1 user user 11M Jul 28 16:47 hetzner2_20180728_202957.tar.gpg root@ose:/home/user/tmp/backblaze/restore.2018-07# root@ose:/home/user/tmp/backblaze/restore.2018-07# tar -xvf hetzner2_20180728_202957.tar root/backups/sync/hetzner2_20180728_202957/etc/ root/backups/sync/hetzner2_20180728_202957/etc/etc.20180728_202957.tar.gz root@ose:/home/user/tmp/backblaze/restore.2018-07# ls -lah total 22M drwxrwxr-x 3 root root 4.0K Jul 28 16:51 . drwxrwxr-x 3 root root 4.0K Jul 28 16:49 .. -rw-rw-r-- 1 root root 11M Jul 28 16:50 hetzner2_20180728_202957.tar -rw-r--r-- 1 user user 11M Jul 28 16:47 hetzner2_20180728_202957.tar.gpg drwxrwxr-x 3 root root 4.0K Jul 28 16:51 root root@ose:/home/user/tmp/backblaze/restore.2018-07# root@ose:/home/user/tmp/backblaze/restore.2018-07# cd root/backups/sync/hetzner2_20180728_202957/etc/ root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# ls -lah total 11M drwxr-xr-x 2 root root 4.0K Jul 28 16:29 . drwxrwxr-x 3 root root 4.0K Jul 28 16:51 .. 
-rw-r--r-- 1 root root 11M Jul 28 16:29 etc.20180728_202957.tar.gz root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# tar -xzvf etc.20180728_202957.tar.gz ... root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# ls -lah total 11M drwxr-xr-x 3 root root 4.0K Jul 28 16:51 . drwxrwxr-x 3 root root 4.0K Jul 28 16:51 .. drwxrwxr-x 98 root root 12K Jul 28 16:53 etc -rw-r--r-- 1 root root 11M Jul 28 16:29 etc.20180728_202957.tar.gz root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# ls -lah etc total 2.0M drwxrwxr-x 98 root root 12K Jul 28 16:53 . drwxr-xr-x 3 root root 4.0K Jul 28 16:51 .. 
drwxr-xr-x 4 root root 4.0K Aug 3 2017 acpi -rw-r--r-- 1 root root 16 Dec 15 2015 adjtime -rw-r--r-- 1 root root 1.5K Jun 7 2013 aliases -rw-r--r-- 1 root root 12K Jul 10 2017 aliases.db drwxr-xr-x 2 root root 4.0K Sep 22 2017 alternatives -rw------- 1 root root 541 Aug 3 2017 anacrontab -rw-r--r-- 1 root root 55 Mar 1 2017 asound.conf drwxr-x--- 3 root root 4.0K Sep 22 2017 audisp drwxr-x--- 3 root root 4.0K Sep 22 2017 audit drwxr-xr-x 2 root root 4.0K Feb 20 14:42 awstats drwxr-xr-x 2 root root 4.0K Oct 13 2016 bacula drwxr-xr-x 2 root root 4.0K Sep 22 2017 bash_completion.d -rw-r--r-- 1 root root 2.8K Nov 5 2016 bashrc drwxr-xr-x 2 root root 4.0K Sep 6 2017 binfmt.d drwxr-xr-x 2 root root 4.0K Mar 26 08:49 cacti -rw-r--r-- 1 root root 38 Aug 30 2017 centos-release -rw-r--r-- 1 root root 51 Aug 30 2017 centos-release-upstream drwxr-xr-x 2 root root 4.0K Aug 4 2017 chkconfig.d -rw-r--r-- 1 root root 1.1K Jan 31 2017 chrony.conf -rw-r----- 1 root 994 62 Dec 15 2015 chrony.keys -rw-r----- 1 root 994 481 Jan 31 2017 chrony.keys.rpmnew drwxr-xr-x 2 root root 4.0K Apr 12 05:19 cron.d drwxr-xr-x 2 root root 4.0K Sep 22 2017 cron.daily -rw------- 1 root root 0 Aug 3 2017 cron.deny drwxr-xr-x 2 root root 4.0K Jan 24 2018 cron.hourly drwxr-xr-x 2 root root 4.0K Jun 9 2014 cron.monthly -rw-r--r-- 1 root root 451 Jun 9 2014 crontab drwxr-xr-x 2 root root 4.0K Jun 9 2014 cron.weekly -rw------- 1 root root 0 Dec 15 2015 crypttab -rw-r--r-- 1 root root 1.6K Nov 5 2016 csh.cshrc -rw-r--r-- 1 root root 841 Jun 7 2013 csh.login drwxr-xr-x 4 root root 4.0K Jul 9 2017 dbus-1 drwxr-xr-x 2 root root 4.0K Sep 22 2017 default drwxr-xr-x 2 root root 4.0K Sep 22 2017 depmod.d drwxr-x--- 4 root root 4.0K Sep 22 2017 dhcp -rw-r--r-- 1 root root 5.0K Nov 4 2016 DIR_COLORS -rw-r--r-- 1 root root 5.6K Nov 4 2016 DIR_COLORS.256color -rw-r--r-- 1 root root 4.6K Nov 4 2016 DIR_COLORS.lightbgcolor -rw-r--r-- 1 root root 1.3K Aug 5 2017 dracut.conf drwxr-xr-x 2 root root 4.0K Aug 5 2017 
dracut.conf.d drwxr-xr-x 2 root root 4.0K Jul 17 22:00 duplicity -rw-r--r-- 1 root root 112 Mar 16 2017 e2fsck.conf -rw-r--r-- 1 root root 0 Nov 5 2016 environment -rw-r--r-- 1 root root 1.3K Nov 5 2016 ethertypes -rw-r--r-- 1 root root 0 Jun 7 2013 exports lrwxrwxrwx 1 root root 56 Apr 11 2016 favicon.png -> /usr/share/icons/hicolor/16x16/apps/fedora-logo-icon.png -rw-r--r-- 1 root root 70 Nov 5 2016 filesystems drwxr-xr-x 3 root root 4.0K Sep 22 2017 fonts -rw-r--r-- 1 root root 226 May 31 2016 fstab drwxr-xr-x 2 root root 4.0K Aug 2 2017 gcrypt -rw-r--r-- 1 root root 842 Nov 5 2016 GeoIP.conf -rw-r--r-- 1 root root 858 Nov 5 2016 GeoIP.conf.default drwxr-xr-x 2 root root 4.0K Nov 5 2016 gnupg -rw-r--r-- 1 root root 94 Mar 24 2017 GREP_COLORS drwxr-xr-x 4 root root 4.0K Jun 9 2014 groff -rw-r--r-- 1 root root 1.1K Jul 27 22:47 group -rw-r--r-- 1 root root 1.1K Jun 6 14:13 group- lrwxrwxrwx 1 root root 22 Sep 22 2017 grub2.cfg -> ../boot/grub2/grub.cfg drwx------ 2 root root 4.0K Sep 22 2017 grub.d ---------- 1 root root 897 Jul 27 22:47 gshadow ---------- 1 root root 886 Mar 2 21:44 gshadow- drwxr-xr-x 3 root root 4.0K Aug 3 2017 gss drwxr-xr-x 2 root root 4.0K Oct 21 2017 hitch -rw-r--r-- 1 root root 9 Jun 7 2013 host.conf -rw-r--r-- 1 root root 31 Jul 9 2017 hostname -rw-r--r-- 1 root root 758 Oct 9 2017 hosts -rw-r--r-- 1 root root 370 Jun 7 2013 hosts.allow -rw-r--r-- 1 root root 479 Jul 28 16:29 hosts.deny drwxr-xr-x 6 root root 4.0K Feb 13 23:05 httpd -rw-r--r-- 1 root root 27K Sep 4 2017 httpd.20170904.tar.gz lrwxrwxrwx 1 root root 11 Sep 22 2017 init.d -> rc.d/init.d -rw-r--r-- 1 root root 511 Aug 3 2017 inittab -rw-r--r-- 1 root root 942 Jun 7 2013 inputrc drwxr-xr-x 2 root root 4.0K Sep 22 2017 iproute2 -rw-r--r-- 1 root root 23 Aug 30 2017 issue -rw-r--r-- 1 root root 22 Aug 30 2017 issue.net drwxr-xr-x 3 root root 4.0K Dec 1 2016 java drwxr-xr-x 2 root root 4.0K Nov 20 2015 jvm drwxr-xr-x 2 root root 4.0K Nov 20 2015 jvm-commmon -rw-r--r-- 1 root root 
7.1K Sep 22 2017 kdump.conf drwxrwx--- 2 root 993 4.0K Jul 17 19:25 keepass -rw-r--r-- 1 root root 590 Apr 28 2017 krb5.conf drwxr-xr-x 2 root root 4.0K Aug 3 2017 krb5.conf.d -rw-r--r-- 1 root root 32K Jul 17 22:00 ld.so.cache -rw-r--r-- 1 root root 28 Feb 27 2013 ld.so.conf drwxr-xr-x 2 root root 4.0K Oct 3 2017 ld.so.conf.d drwxr-xr-x 9 root root 4.0K Jul 13 00:20 letsencrypt -r-------- 1 root root 39K Oct 8 2017 letsencrypt.20171028.tar.gz -rw-r----- 1 root root 191 Apr 19 2017 libaudit.conf drwxr-xr-x 5 root root 4.0K Aug 9 2017 libreport -rw-r--r-- 1 root root 2.4K Oct 12 2013 libuser.conf -rw-r--r-- 1 root root 19 Dec 15 2015 locale.conf lrwxrwxrwx 1 root root 25 Jul 9 2017 localtime -> ../usr/share/zoneinfo/UTC -rw-r--r-- 1 root root 2.0K Nov 4 2016 login.defs -rw-r--r-- 1 root root 675 Jul 9 2017 logrotate.conf drwxr-xr-x 2 root root 4.0K Apr 12 05:19 logrotate.d drwxr-xr-x 4 root root 4.0K Nov 5 2016 logwatch drwxr-xr-x 6 root root 4.0K Sep 22 2017 lvm -r--r--r-- 1 root root 33 May 31 2016 machine-id -rw-r--r-- 1 root root 111 Nov 5 2016 magic -rw-r--r-- 1 root root 272 May 14 2013 mailcap -rw-r--r-- 1 root root 2.0K Aug 3 2017 mail.rc -rw-r--r-- 1 root root 5.1K Aug 7 2017 makedumpfile.conf.sample -rw-r--r-- 1 root root 5.1K Jun 9 2014 man_db.conf drwxr-xr-x 2 root root 4.0K Nov 20 2015 maven -rw-r--r-- 1 root root 287 May 31 2016 mdadm.conf -rw-r--r-- 1 root root 51K May 14 2013 mime.types -rw-r--r-- 1 root root 936 Aug 2 2017 mke2fs.conf drwxr-xr-x 2 root root 4.0K Sep 22 2017 modprobe.d drwxr-xr-x 2 root root 4.0K Sep 6 2017 modules-load.d -rw-r--r-- 1 root root 0 Jun 7 2013 motd lrwxrwxrwx 1 root root 17 Dec 15 2015 mtab -> /proc/self/mounts drwxr-xr-x 8 root root 4.0K Mar 3 08:41 munin -rw-r--r-- 1 root root 630 Aug 11 2017 my.cnf drwxr-xr-x 2 root root 4.0K Sep 22 2017 my.cnf.d -rw-r--r-- 1 root root 8.7K Jun 10 2014 nanorc drwxr-xr-x 3 root root 4.0K May 3 2017 NetworkManager -rw-r--r-- 1 root root 58 Aug 3 2017 networks drwxr-xr-x 4 root root 
4.0K Jun 18 15:24 nginx -rw-r--r-- 1 root root 1.7K Jul 9 2017 nsswitch.conf -rw-r--r-- 1 root root 1.7K May 31 2016 nsswitch.conf.bak -rw-r--r-- 1 root root 1.7K Aug 1 2017 nsswitch.conf.rpmnew drwxr-xr-x 3 root root 4.0K Jul 9 2017 ntp -rw-r--r-- 1 root root 2.2K May 31 2016 ntp.conf drwxr-xr-x 3 root root 4.0K Sep 22 2017 openldap drwxr-xr-x 2 root root 4.0K Nov 5 2016 opt -rw-r--r-- 1 root root 393 Aug 30 2017 os-release -rw------- 1 root root 89 Jul 10 2017 ossec-init.conf drwxr-xr-x 2 root root 4.0K Sep 22 2017 pam.d -rw-r--r-- 1 root root 2.3K Jul 27 22:47 passwd -rw-r--r-- 1 root root 2.2K Mar 2 21:44 passwd- drwxr-xr-x 2 root root 4.0K Jun 11 2017 pear -rw-r--r-- 1 root root 1.1K Sep 22 2017 pear.conf drwxr-xr-x 2 root root 4.0K Mar 16 19:38 php.d -rw-r--r-- 1 root root 65K Mar 16 20:09 php.ini -rw-r--r-- 1 root root 64K Jul 11 2017 php.ini.20170711.bak -rw-r--r-- 1 root root 64K Jul 18 2017 php.ini.20170718.bak -rw-r--r-- 1 root root 65K Aug 2 2017 php.ini.20170802.hardened -rw-r--r-- 1 root root 64K Sep 17 2016 php.ini.rpmnew drwxr-xr-x 2 root root 4.0K Mar 16 17:10 php-zts.d drwxr-xr-x 3 root root 4.0K Aug 4 2017 pkcs11 drwxr-xr-x 11 root root 4.0K Sep 22 2017 pki drwxr-xr-x 2 root root 4.0K Sep 22 2017 plymouth drwxr-xr-x 5 root root 4.0K Nov 5 2016 pm drwxr-xr-x 5 root root 4.0K Jul 9 2017 polkit-1 drwxr-xr-x 2 root root 4.0K Jun 10 2014 popt.d drwxr-xr-x 2 root root 4.0K Jul 10 22:17 postfix drwxr-xr-x 3 root root 4.0K Sep 22 2017 ppp drwxr-xr-x 2 root root 4.0K Sep 22 2017 prelink.conf.d -rw-r--r-- 1 root root 233 Jun 7 2013 printcap -rw-r--r-- 1 root root 1.8K Nov 5 2016 profile drwxr-xr-x 2 root root 4.0K May 24 23:49 profile.d -rw-r--r-- 1 root root 6.4K Jun 7 2013 protocols drwxr-xr-x 2 root root 4.0K Sep 22 2017 python lrwxrwxrwx 1 root root 10 Sep 22 2017 rc0.d -> rc.d/rc0.d lrwxrwxrwx 1 root root 10 Sep 22 2017 rc1.d -> rc.d/rc1.d lrwxrwxrwx 1 root root 10 Sep 22 2017 rc2.d -> rc.d/rc2.d lrwxrwxrwx 1 root root 10 Sep 22 2017 rc3.d -> 
rc.d/rc3.d lrwxrwxrwx 1 root root 10 Sep 22 2017 rc4.d -> rc.d/rc4.d lrwxrwxrwx 1 root root 10 Sep 22 2017 rc5.d -> rc.d/rc5.d lrwxrwxrwx 1 root root 10 Sep 22 2017 rc6.d -> rc.d/rc6.d drwxr-xr-x 10 root root 4.0K Aug 4 2017 rc.d lrwxrwxrwx 1 root root 13 Sep 22 2017 rc.local -> rc.d/rc.local lrwxrwxrwx 1 root root 14 Sep 22 2017 redhat-release -> centos-release -rw-r--r-- 1 root root 245 May 31 2016 resolv.conf -rw-r--r-- 1 root root 1.6K Dec 24 2012 rpc drwxr-xr-x 2 root root 4.0K Oct 3 2017 rpm -rw-r--r-- 1 root root 458 Aug 2 2017 rsyncd.conf -rw-r--r-- 1 root root 3.3K Jul 13 2017 rsyslog.conf drwxr-xr-x 2 root root 4.0K Aug 6 2017 rsyslog.d -rw-r--r-- 1 root root 966 Aug 3 2017 rwtab drwxr-xr-x 2 root root 4.0K Aug 3 2017 rwtab.d drwxr-xr-x 2 root root 4.0K Aug 2 2017 sasl2 -rw-r--r-- 1 root root 6.6K Feb 16 2016 screenrc -rw------- 1 root root 221 Nov 5 2016 securetty drwxr-xr-x 6 root root 4.0K Jul 9 2017 security drwxr-xr-x 5 root root 4.0K Sep 22 2017 selinux -rw-r--r-- 1 root root 655K Jun 7 2013 services -rw-r--r-- 1 root root 216 Aug 4 2017 sestatus.conf ---------- 1 root root 2.1K Jul 27 22:47 shadow ---------- 1 root root 2.0K Jun 15 09:12 shadow- ---------- 1 root root 1.5K Jul 9 2017 shadow.20170709.bak -rw-r--r-- 1 root root 76 Jun 7 2013 shells drwxr-xr-x 2 root root 4.0K Sep 22 2017 skel drwxr-xr-x 2 root root 4.0K Mar 2 15:58 snmp drwxr-xr-x 3 root root 4.0K Jan 10 2018 ssh drwxr-xr-x 2 root root 4.0K Sep 22 2017 ssl -rw-r--r-- 1 root root 212 Aug 3 2017 statetab drwxr-xr-x 2 root root 4.0K Aug 3 2017 statetab.d -rw-r--r-- 1 root root 0 Nov 5 2016 subgid -rw-r--r-- 1 root root 0 Nov 5 2016 subuid drwxr-xr-x 2 root root 4.0K Aug 23 2017 subversion -rw-r----- 1 root root 1.8K Sep 6 2017 sudo.conf -r--r----- 1 root root 4.6K Mar 29 2017 sudoers drwxr-x--- 2 root root 4.0K Sep 6 2017 sudoers.d -r--r----- 1 root root 3.9K Sep 6 2017 sudoers.rpmnew -rw-r----- 1 root root 3.2K Sep 6 2017 sudo-ldap.conf drwxr-xr-x 7 root root 4.0K Mar 2 21:44 sysconfig 
-rw-r--r-- 1 root root 449 Aug 3 2017 sysctl.conf drwxr-xr-x 2 root root 4.0K Sep 22 2017 sysctl.d drwxr-xr-x 4 root root 4.0K Sep 22 2017 systemd lrwxrwxrwx 1 root root 14 Sep 22 2017 system-release -> centos-release -rw-r--r-- 1 root root 23 Aug 30 2017 system-release-cpe -rw------- 1 59 59 6.9K Aug 3 2017 tcsd.conf drwxr-xr-x 2 root root 4.0K Sep 6 2017 terminfo drwxr-xr-x 2 root root 4.0K Sep 6 2017 tmpfiles.d -rw-r--r-- 1 root root 199 Oct 6 2014 trickled.conf -rw-r--r-- 1 root root 750 Aug 23 2017 trusted-key.key drwxr-xr-x 2 root root 4.0K Sep 22 2017 tuned drwxr-xr-x 3 root root 4.0K Oct 9 2017 udev drwxr-xr-x 5 root root 4.0K Apr 12 16:15 varnish drwxr-xr-x 5 root root 4.0K Nov 19 2017 varnish.20171119.bak -rw-r--r-- 1 root root 37 Dec 15 2015 vconsole.conf -rw-r--r-- 1 root root 2.0K Aug 1 2017 vimrc -rw-r--r-- 1 root root 2.0K Aug 1 2017 virc drwxr-xr-x 2 root root 4.0K Sep 22 2017 vsftpd drwxr-xr-x 118 root root 4.0K Jul 9 2017 webmin -rw-r--r-- 1 root root 4.4K Aug 3 2017 wgetrc -rw-r--r-- 1 root root 382 Mar 29 2013 whois.conf drwxr-xr-x 5 root root 4.0K Nov 5 2016 X11 drwxr-xr-x 4 root root 4.0K Nov 5 2016 xdg drwxr-xr-x 2 root root 4.0K Nov 5 2016 xinetd.d drwxr-xr-x 6 root root 4.0K Sep 22 2017 yum -rw-r--r-- 1 root root 970 Aug 5 2017 yum.conf drwxr-xr-x 2 root root 4.0K Oct 27 2017 yum.repos.d root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# cat etc/hostname hetzner2.opensourceecology.org root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc#
- there's quite a lot of nested dirs & tarballs, but it's all there, and it's all very intuitive to recover--much simpler than duplicity or glacier!
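For reference, the decrypt-and-extract round trip above can be reproduced end-to-end with a throwaway archive. Everything below is illustrative (made-up paths and passphrase, not the real backup key); the gpg flags match the ones used in the real scripts, with `--pinentry-mode loopback` added, which newer gpg 2.1+ needs for batch passphrase files (the older gpg used in the session above didn't).

```shell
#!/bin/bash
# Self-contained demo of the backup/restore round trip: tar a directory,
# symmetrically encrypt it with a passphrase file, then decrypt and extract.
set -e
workdir=$(mktemp -d)
cd "$workdir"
echo "not-the-real-key" > pass.key

# "backup": tar + symmetric gpg encryption
mkdir -p etc
echo "hetzner2.opensourceecology.org" > etc/hostname
tar -cf backup.tar etc
gpg --symmetric --cipher-algo aes --batch --pinentry-mode loopback \
    --passphrase-file pass.key backup.tar
rm -r backup.tar etc

# "restore": decrypt, then extract, mirroring the steps done on the laptop
gpg --output backup.tar --batch --pinentry-mode loopback \
    --passphrase-file pass.key --decrypt backup.tar.gpg
tar -xf backup.tar
cat etc/hostname    # -> hetzner2.opensourceecology.org
```

The round trip needs nothing but tar and gpg, which is exactly why this restore path is simpler than duplicity's or glacier's.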
- I added some logic to the script to add a prefix to the archive file based on the current day.
- if the backup was done on January first, the prefix will be "yearly_"
- if it's the 1st of any month, it will be "monthly_"
- if it's the 1st day of the week (Monday), it will be "weekly_"
- otherwise, the prefix is "daily_"
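As a sketch, the prefix selection described above can be implemented with GNU date. The function and variable names here are illustrative, not the actual code in backup2.sh:

```shell
#!/bin/bash
# Pick an archive-name prefix from the date, so backblaze lifecycle rules
# can key off the prefix. Uses GNU date; %u gives the ISO weekday (1 = Monday).
prefix_for_date() {
    local month day weekday
    month=$(date -d "$1" +%m)
    day=$(date -d "$1" +%d)
    weekday=$(date -d "$1" +%u)
    if [ "$month" = "01" ] && [ "$day" = "01" ]; then
        echo "yearly_"
    elif [ "$day" = "01" ]; then
        echo "monthly_"
    elif [ "$weekday" = "1" ]; then
        echo "weekly_"
    else
        echo "daily_"
    fi
}

prefix_for_date "2018-01-01"   # yearly_
prefix_for_date "2018-08-01"   # monthly_
prefix_for_date "2018-07-30"   # weekly_ (a Monday)
prefix_for_date "2018-07-28"   # daily_
```

Note the precedence: Jan 1 is always "yearly_" even though it is also the 1st of a month, and the 1st of a month wins over it happening to be a Monday.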
- now we can create lifecycle rules in backblaze, greatly simplifying management and reducing costs.
- I'll create no lifecycle rules for "yearly_*" files
- I'll keep "daily_*" files for 3 days
- I'll keep "monthly_*" files for 1 year = 365 days
- I'll keep "weekly_*" files for 1 month = 31 days
- I created the above lifecycle settings in the wui. It was easy: after logging in, just click "Buckets" in the left navigation panel, then click the "Lifecycle Settings" link on the corresponding bucket (i.e. ose-server-backups). A dialog opens; click the "Use custom lifecycle rules" option, then "Add Lifecycle Rules", which adds another row to the list of rules. Each rule has 3 fields: "File Path (fileNamePrefix)", "Days Till Hide (daysFromUploadingToHiding)", and "Days Till Delete (daysFromHidingToDeleting)". I set both fields to the numbers described above. I'm not sure if this means that daily files will be deleted in 3 days or 6 (the sum of the two fields?); I'll just check back later and see. Note that it wouldn't let me set either of these fields to "0".
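If we later want to manage these rules from the CLI instead of the wui, the same three custom rules can be expressed as a lifecycleRules JSON document (the b2 CLI's `update-bucket` command accepts one). This is a sketch only: the exact split of days between the two fields is my assumption, with the minimum "1" in the delete field since the wui refuses "0":

```json
[
  {"fileNamePrefix": "daily_",   "daysFromUploadingToHiding": 3,   "daysFromHidingToDeleting": 1},
  {"fileNamePrefix": "weekly_",  "daysFromUploadingToHiding": 31,  "daysFromHidingToDeleting": 1},
  {"fileNamePrefix": "monthly_", "daysFromUploadingToHiding": 365, "daysFromHidingToDeleting": 1}
]
```

No rule for "yearly_*" means those files are never hidden or deleted, matching the plan above.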
- I updated the existing backup.sh script to call my b2 poc backup2.sh script at the end of its run rather than having a separate cron job.
- I sent an email to Marcin asking him to add our billing information to our b2 account
- ...
- meanwhile, I confirmed that the inventory job I'd initiated against our aws glacier account finished
[root@hetzner2 ~]# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id 'TtDuZgF5z0CTF43G3rJL1lDVufM_xvXEY0RfxntrS5t-JSHrV88r4aigmMJRv5OUAj0UbQaX-8Bk5eiBL_A3e0zMzs-t' output.json
{
    "status": 200,
    "acceptRanges": "bytes",
    "contentType": "application/json"
}
[root@hetzner2 ~]#
- I opened the json, and it showed no changes from before! Indeed, checking the bash script, it looks like I had commented-out the actual upload-to-glacier lines, probably because my testing for osesurv kept adding redundant copies.
- I modified the uploadToGlacier.sh script to actually upload to glacier (!) and re-ran it.
=Fri Jul 27, 2018=
- emails
- the glacier.py script that I executed a couple of days ago exited with an error when attempting to execute a query (to what appears to be a local db?)
[root@hetzner2 sync]# date
Wed Jul 25 22:31:02 UTC 2018
[root@hetzner2 sync]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 sync]# glacier.py --region us-west-2 vault --wait sync deleteMeIn2020
usage: glacier.py [-h] [--region REGION] {vault,archive,job} ...
glacier.py: error: unrecognized arguments: --wait
[root@hetzner2 sync]# glacier.py --region us-west-2 vault sync --wait deleteMeIn2020
Traceback (most recent call last):
  File "/root/bin/glacier.py", line 736, in <module>
    main()
  File "/root/bin/glacier.py", line 732, in main
    App().main()
  File "/root/bin/glacier.py", line 718, in main
    self.args.func()
  File "/root/bin/glacier.py", line 471, in vault_sync
    wait=self.args.wait)
  File "/root/bin/glacier.py", line 462, in _vault_sync
    self._vault_sync_reconcile(vault, job, fix=fix)
  File "/root/bin/glacier.py", line 437, in _vault_sync_reconcile
    fix=fix)
  File "/root/bin/glacier.py", line 259, in mark_seen_upstream
    key=self.key, vault=vault, id=id).one()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2395, in one
    ret = list(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2437, in iter
    self.session._autoflush()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1208, in _autoflush
    util.raise_from_cause(e)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1198, in _autoflush
    self.flush()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in flush
    self._flush(objects)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush
    transaction.rollback(_capture_exception=True)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in exit
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2001, in _flush
    flush_context.execute()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute
    rec.execute(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 526, in execute
    uow
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 65, in save_obj
    mapper, table, insert)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 570, in _emit_insert_statements
    execute(statement, multiparams)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
    return meth(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
    context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1159, in _handle_dbapi_exception
    exc_info
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
    context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (IntegrityError) column id is not unique u'INSERT INTO archive (id, name, vault, "key", last_seen_upstream, created_here, deleted_here) VALUES (?, ?, ?, ?, ?, ?, ?)' (u'qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA', u'this is a metadata file showing the file and dir list contents of the archive of the same name', u'deleteMeIn2020', 'AKIAIWVTQ5TWWAGY5XBA', 1532298685, 1532571707.728233, None)
[root@hetzner2 sync]# timed out waiting for input: auto-logout
- unfortunately, I don't see the job id in that output. I could initiate a new one with `aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'`
- but, there does appear to be a db (from the above data), so maybe I could just steal the job id from glacier.py's db? I found the db in /root/.cache/glacier-cli/db
[root@hetzner2 ~]# ls -lah /root/.cache/glacier-cli/db
-rw-r--r-- 1 root root 31K Jul 17 00:45 /root/.cache/glacier-cli/db
[root@hetzner2 ~]# sqlite3 /root/.cache/glacier-cli/db
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .tables
archive
sqlite> .schema
CREATE TABLE archive (
	id VARCHAR NOT NULL,
	name VARCHAR,
	vault VARCHAR NOT NULL,
	"key" VARCHAR NOT NULL,
	last_seen_upstream INTEGER,
	created_here INTEGER,
	deleted_here INTEGER,
	PRIMARY KEY (id)
);
sqlite>
- so the db's archive list _does_ show the first round of backups that I pushed to glacier. This is the same file list that glacier.py recently refused to show
sqlite> select name from archive;
this is a metadata file showing the file and dir list contents of the archive of the same name
this is a metadata file showing the file and dir list contents of the archive of the same name
this is a metadata file showing the file and dir list contents of the archive of the same name
this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates
this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates
hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name
hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates
hetzner1_20171001-052001.fileList.txt.bz2.gpg
hetzner1_20171001-052001.tar.gpg
hetzner1_20171101-062001.fileList.txt.bz2.gpg
hetzner1_20171101-062001.tar.gpg
hetzner1_20171201-062001.fileList.txt.bz2.gpg
hetzner2_20170702-052001.fileList.txt.bz2.gpg
hetzner2_20170702-052001.tar.gpg
hetzner2_20170801-072001.fileList.txt.bz2.gpg
hetzner2_20170801-072001.tar.gpg
hetzner2_20170901-072001.fileList.txt.bz2.gpg
hetzner2_20170901-072001.tar.gpg
hetzner2_20171001-072001.fileList.txt.bz2.gpg
hetzner2_20171001-072001.tar.gpg
hetzner2_20171101-072001.fileList.txt.bz2.gpg
hetzner2_20171101-072001.tar.gpg
hetzner2_20171202-072001.fileList.txt.bz2.gpg
hetzner2_20171202-072001.tar.gpg
hetzner2_20180102-072001.fileList.txt.bz2.gpg
hetzner2_20180102-072001.tar.gpg
hetzner2_20180202-072001.fileList.txt.bz2.gpg
hetzner2_20180302-072001.fileList.txt.bz2.gpg
hetzner2_20180401-072001.fileList.txt.bz2.gpg
hetzner2_20180401-072001.tar.gpg
hetzner1_20170701-052001.fileList.txt.bz2.gpg
hetzner1_20170801-052001.fileList.txt.bz2.gpg
hetzner1_20180101-062001.fileList.txt.bz2.gpg
hetzner1_20180201-062001.fileList.txt.bz2.gpg
hetzner1_20180201-062001.tar.gpg
hetzner1_20180301-062002.fileList.txt.bz2.gpg
hetzner1_20180301-062002.tar.gpg
hetzner1_20180401-052001.fileList.txt.bz2.gpg
hetzner1_20180401-052001.tar.gpg
hetzner1_20170701-052001.tar.gpg
hetzner1_20171201-062001.fileList.txt.bz2.gpg
hetzner1_20171201-062001.tar.gpg
hetzner2_20180202-072001.tar.gpg
hetzner2_20180302-072001.tar.gpg
hetzner1_20170801-052001.tar.gpg
hetzner1_20180101-062001.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
sqlite>
- unfortunately, there doesn't appear to be anything here listing the job ids. I'll just do this with the `aws` command, then query for the output after _another_ day of waiting. I'm really getting sick of glacier.
[root@hetzner2 ~]# aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'
{
    "location": "/099400651767/vaults/deleteMeIn2020/jobs/TtDuZgF5z0CTF43G3rJL1lDVufM_xvXEY0RfxntrS5t-JSHrV88r4aigmMJRv5OUAj0UbQaX-8Bk5eiBL_A3e0zMzs-t",
    "jobId": "TtDuZgF5z0CTF43G3rJL1lDVufM_xvXEY0RfxntrS5t-JSHrV88r4aigmMJRv5OUAj0UbQaX-8Bk5eiBL_A3e0zMzs-t"
}
[root@hetzner2 ~]#
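Since the sqlite metadata doesn't record job ids, something like the following sketch could capture the jobId from the `initiate-job` response so tomorrow's follow-up doesn't depend on terminal scrollback (the flow is an assumption, not part of our existing scripts; here the JSON response shown above is parsed from a saved string rather than piped live from `aws`):

```shell
# parse the jobId out of the initiate-job JSON response
response='{"jobId": "TtDuZgF5z0CTF43G3rJL1lDVufM_xvXEY0RfxntrS5t-JSHrV88r4aigmMJRv5OUAj0UbQaX-8Bk5eiBL_A3e0zMzs-t"}'
jobId=$(printf '%s' "$response" | sed -n 's/.*"jobId": "\([^"]*\)".*/\1/p')
echo "$jobId"
# once `aws glacier describe-job --account-id - --vault-name deleteMeIn2020 \
#   --job-id "$jobId"` reports Completed=true, fetch the inventory with:
# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 \
#   --job-id "$jobId" inventory.json
```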
- ...
- meanwhile, I continue my poc to deprecate aws glacier with the cheaper & easier backblaze b2
- I attempted to do an `ls` of our 'ose-server-backups' bucket that I uploaded a file to last week, but I didn't get any results from the cli
[root@hetzner2 restore.2018-07]# duplicity list-current-files b2://${b2_accountid}:${b2_prikey}@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 19 21:14:17 2018
Tue Jul 17 00:51:32 2018 .
[root@hetzner2 restore.2018-07]#
- I tried logging into the wui, and I saw that it had 3 files:
- duplicity-full-signatures.20180719T211417Z.sigtar.gpg 403.0 bytes 07/19/2018 17:14
- duplicity-full.20180719T211417Z.manifest.gpg 266.0 bytes 07/19/2018 17:14
- duplicity-full.20180719T211417Z.vol1.difftar.gpg 7.9 KB 07/19/2018 17:14
- I got some interesting output from the 'collection-status' argument
[root@hetzner2 restore.2018-07]# duplicity collection-status b2://${b2_accountid}:${b2_prikey}@ose-server-backups/
Last full backup date: Thu Jul 19 21:14:17 2018
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /root/.cache/duplicity/c034c9db7d546e6993579391627690db

Found 0 secondary backup chains.

Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Thu Jul 19 21:14:17 2018
Chain end time: Thu Jul 19 21:14:17 2018
Number of contained backup sets: 1
Total number of contained volumes: 1
 Type of backup set:                            Time:      Num volumes:
                Full         Thu Jul 19 21:14:17 2018                 1
-------------------------
No orphaned or incomplete backup sets found.
[root@hetzner2 restore.2018-07]#
- ugh, looking through my old logs, I discovered that I disclosed the existing b2 application id & key. So I changed it.
- I did some googling, but I couldn't find a way to list the file that I already uploaded with duplicity. So I tried a restore. For some reason it replaced the directory where I wanted the file to be dropped with a single file that's still encrypted
[root@hetzner2 backblaze]# date
Sat Jul 28 01:49:41 UTC 2018
[root@hetzner2 backblaze]# pwd
/var/tmp/backblaze
[root@hetzner2 backblaze]# ls -lah
total 12K
drwxr-xr-x   3 root root 4.0K Jul 28 01:48 .
drwxrwxrwt. 54 root root 4.0K Jul 27 22:10 ..
drwxr-xr-x   2 root root 4.0K Jul 28 01:48 restore.2018-07
[root@hetzner2 backblaze]# ls -lah restore.2018-07/
total 8.0K
drwxr-xr-x 2 root root 4.0K Jul 28 01:48 .
drwxr-xr-x 3 root root 4.0K Jul 28 01:48 ..
[root@hetzner2 backblaze]# duplicity restore b2://${b2_accountid}:${b2_prikey}@ose-server-backups/ /var/tmp/backblaze/restore.2018-07/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 19 21:14:17 2018
[root@hetzner2 backblaze]# ls -lah
total 16K
drwxr-xr-x   2 root root 4.0K Jul 28 01:50 .
drwxrwxrwt. 54 root root 4.0K Jul 27 22:10 ..
-rw-r--r--   1 root root 7.5K Jul 17 00:51 restore.2018-07
[root@hetzner2 backblaze]# ls -lah restore.2018-07
-rw-r--r-- 1 root root 7.5K Jul 17 00:51 restore.2018-07
[root@hetzner2 backblaze]#
[root@hetzner2 backblaze]# cat restore.2018-07 | gpg --list-packets
:symkey enc packet: version 4, cipher 7, s2k 3, hash 2
        salt 016b52b5adff8d98, count 20971520 (228)
gpg: AES encrypted data
gpg: cancelled by user
:encrypted data packet:
        length: 7624
        mdc_method: 2
gpg: encrypted with 1 passphrase
gpg: decryption failed: No secret key
[root@hetzner2 backblaze]#
- because I don't trust duplicity yet, the file that I uploaded was itself already gpg-encrypted. So duplicity should have encrypted our encrypted file, giving two layers. Trouble is, I don't know whether duplicity already stripped its own layer on restore. In any case, both layers use the same key, so I was able to decrypt it at least once
[root@hetzner2 backblaze]# gpg -o decrypted.txt --batch --passphrase-file /root/backups/ose-backups-cron.key --decrypt restore.2018-07
gpg: AES encrypted data
gpg: encrypted with 1 passphrase
[root@hetzner2 backblaze]#
- ok, so that actually worked.
[root@hetzner2 backblaze]# bzcat restore.2018-07 | head -n 5
bzcat: restore.2018-07 is not a bzip2 file.
[root@hetzner2 backblaze]# bzcat decrypted.txt | head -n 5
================================================================================
This file is metadata for the archive 'hetzner1_final_backup_before_hetzner1_deprecation_osesurv'. In it, we list all the files included in the compressed/encrypted archive (produced using 'ls -lahR /var/tmp/deprecateHetzner1/osesurv'), including the files within the tarballs within the archive (produced using 'find /var/tmp/deprecateHetzner1/osesurv -type f -exec tar -tvf '{}' \; ')

This archive was made as a backup of the files and databases that were previously used on hetnzer1 prior to migrating to hetzner2 in 2018. Before we cancelled our contract for hetzner1, this backup was made & put on glacier for long-term storage in-case we learned later that we missed some content on the migration. Form more information, please see the following link:

* https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation
[root@hetzner2 backblaze]#
- But what happens when I have uploaded more than just 1 file? Let me upload some more complex dir/file structures, then attempt to restore both.
[root@hetzner2 backblaze]# mkdir sync
[root@hetzner2 backblaze]# ls
decrypted.txt  restore.2018-07  sync
[root@hetzner2 backblaze]# cd sync
[root@hetzner2 sync]# mkdir one
[root@hetzner2 sync]# mkdir two
[root@hetzner2 sync]# mkdir two/three
[root@hetzner2 sync]# touch headFile.txt
[root@hetzner2 sync]# echo "file contents $RANDOM" > headFile.txt
[root@hetzner2 sync]# cat headFile.txt
file contents 5787
[root@hetzner2 sync]# echo "file contents $RANDOM" > one/oneFile.txt
[root@hetzner2 sync]# echo "file contents $RANDOM" > two/twoFile.txt
[root@hetzner2 sync]# echo "file contents $RANDOM" > two/three/threeFile.txt
[root@hetzner2 sync]# ls -lahR
.:
total 20K
drwxr-xr-x 4 root root 4.0K Jul 28 01:56 .
drwxr-xr-x 3 root root 4.0K Jul 28 01:55 ..
-rw-r--r-- 1 root root   19 Jul 28 01:56 headFile.txt
drwxr-xr-x 2 root root 4.0K Jul 28 01:57 one
drwxr-xr-x 3 root root 4.0K Jul 28 01:57 two

./one:
total 12K
drwxr-xr-x 2 root root 4.0K Jul 28 01:57 .
drwxr-xr-x 4 root root 4.0K Jul 28 01:56 ..
-rw-r--r-- 1 root root   20 Jul 28 01:57 oneFile.txt

./two:
total 16K
drwxr-xr-x 3 root root 4.0K Jul 28 01:57 .
drwxr-xr-x 4 root root 4.0K Jul 28 01:56 ..
drwxr-xr-x 2 root root 4.0K Jul 28 01:57 three
-rw-r--r-- 1 root root   20 Jul 28 01:57 twoFile.txt

./two/three:
total 12K
drwxr-xr-x 2 root root 4.0K Jul 28 01:57 .
drwxr-xr-x 3 root root 4.0K Jul 28 01:57 ..
-rw-r--r-- 1 root root   18 Jul 28 01:57 threeFile.txt
[root@hetzner2 sync]# grep -ir 'file' *
headFile.txt:file contents 5787
one/oneFile.txt:file contents 12508
two/three/threeFile.txt:file contents 776
two/twoFile.txt:file contents 25544
[root@hetzner2 sync]#
- now when I try to back that up, it complains that I changed the source. I guess this makes sense; duplicity is thinking differently than I am. I want just a bucket full of files that I drop in, but duplicity wants a directory that it can keep in sync. The question is: which is better for OSE? I think simpler is better. Duplicity is nice, but I'm not sure the complexity is worth it. I'd rather have a bunch of gpg files just sitting there, accessible from the backblaze wui, downloadable, and decryptable following our documentation. Duplicity would also be better in terms of deltas, but what's that worth if nobody can recover our backups? Restorability is the most important feature of a backup. A backup is not a backup if you can't restore from it!
[root@hetzner2 sync]# duplicity /var/tmp/backblaze/sync/ b2://${b2_accountid}:${b2_prikey}@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 19 21:14:17 2018
Fatal Error: Backup source directory has changed.
Current directory: /var/tmp/backblaze/sync
Previous directory: /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg

Aborting because you may have accidentally tried to backup two different data sets to the same remote location, or using the same archive directory.  If this is not a mistake, use the --allow-source-mismatch switch to avoid seeing this message
[root@hetzner2 sync]#
- alternatively, let's check out how easy it is to list our existing files with the `b2` cli tool. much easier!
[maltfield@hetzner2 ~]$ b2 authorize-account ${b2_accountid} ${b2_prikey}
Using https://api.backblazeb2.com
[maltfield@hetzner2 ~]$ b2 list-buckets
5605817c251dadb96e4d0118  allPrivate  ose-server-backups
[maltfield@hetzner2 ~]$ b2 ls ose-server-backups
duplicity-full-signatures.20180719T211417Z.sigtar.gpg
duplicity-full.20180719T211417Z.manifest.gpg
duplicity-full.20180719T211417Z.vol1.difftar.gpg
[maltfield@hetzner2 ~]$
- and the upload (sync) works well too, but the 'keepDays' option deleted (well, "hid" them, probably the non-destructive pre-step prior to deletion) the duplicity files. I thought it would only apply to the files I was uploading *now*, but apparently not. Note we can still create lifecycle rules in b2 that match by filename prefix. So, for example, we can have "daily_*" files deleted after 3 days and "monthly_*" files deleted after 2 years (or never).
[maltfield@hetzner2 backblaze]$ date
Sat Jul 28 02:12:32 UTC 2018
[maltfield@hetzner2 backblaze]$ pwd
/var/tmp/backblaze
[maltfield@hetzner2 backblaze]$ ls -lah
total 16K
drwxr-xr-x   4 root      root 4.0K Jul 28 02:11 .
drwxrwxrwt. 54 root      root 4.0K Jul 27 22:10 ..
drwxr-xr-x   2 maltfield root 4.0K Jul 28 02:11 restore.2018-07
drwxr-xr-x   4 root      root 4.0K Jul 28 01:56 sync
[maltfield@hetzner2 backblaze]$ b2 sync --keepDays 1 sync/ b2://ose-server-backups/
hide   duplicity-full.20180719T211417Z.manifest.gpg
hide   duplicity-full-signatures.20180719T211417Z.sigtar.gpg
upload two/twoFile.txt
upload headFile.txt
hide   duplicity-full.20180719T211417Z.vol1.difftar.gpg
upload two/three/threeFile.txt
upload one/oneFile.txt
[maltfield@hetzner2 backblaze]$ b2 ls ose-server-backups
headFile.txt
one/
two/
[maltfield@hetzner2 backblaze]$
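For reference, such prefix-based rules are configured per-bucket. A sketch of what the lifecycleRules JSON might look like, per the B2 lifecycle rules schema (the "daily_" and "monthly_" prefixes are hypothetical names, not something we upload yet):

```json
[
  {
    "fileNamePrefix": "daily_",
    "daysFromUploadingToHiding": 3,
    "daysFromHidingToDeleting": 1
  },
  {
    "fileNamePrefix": "monthly_",
    "daysFromUploadingToHiding": 730,
    "daysFromHidingToDeleting": 1
  }
]
```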
- the restore worked too, though it crashed very ungracefully when I failed to enter the destination file name (I assumed it would auto-fill the name if I gave it the dest dir)
[maltfield@hetzner2 backblaze]$ ls -lah restore.2018-07/
total 8.0K
drwxr-xr-x 2 maltfield root 4.0K Jul 28 02:11 .
drwxr-xr-x 4 root      root 4.0K Jul 28 02:11 ..
[maltfield@hetzner2 backblaze]$ b2 download-file-by-name ose-server-backups two/three/threeFile.txt restore.2018-07/
Traceback (most recent call last):
  File "/usr/bin/b2", line 11, in <module>
    load_entry_point('b2==1.2.1', 'console_scripts', 'b2')()
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 1257, in main
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 1138, in run_command
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 398, in run
  File "/usr/lib/python2.7/site-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
    return function(*wrapee_args, **wrapee_kwargs)
  File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 166, in download_file_by_name
  File "build/bdist.linux-x86_64/egg/b2/session.py", line 38, in wrapper
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 210, in download_file_by_name
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 267, in _download_file_from_url
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "build/bdist.linux-x86_64/egg/b2/download_dest.py", line 186, in write_file_and_report_progress_context
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "build/bdist.linux-x86_64/egg/b2/download_dest.py", line 92, in write_to_local_file_context
IOError: [Errno 21] Is a directory: u'restore.2018-07/'
[maltfield@hetzner2 backblaze]$ b2 download-file-by-name ose-server-backups two/three/threeFile.txt restore.2018-07/threeFile.txt
restore.2018-07/threeFile.txt: 100%|| 18.0/18.0 [00:00<00:00, 18.2kB/s]
File name:    two/three/threeFile.txt
File id:      4_z5605817c251dadb96e4d0118_f10855cfa9b477620_d20180728_m021554_c001_v0001011_t0039
File size:    18
Content type: text/plain
Content sha1: d5c679a00e36769233a138252c2193ccb4ac5ffb
INFO src_last_modified_millis: 1532743040929
checksum matches
[maltfield@hetzner2 backblaze]$ ls -lah restore.2018-07/
total 12K
drwxr-xr-x 2 maltfield root      4.0K Jul 28 02:22 .
drwxr-xr-x 4 root      root      4.0K Jul 28 02:11 ..
-rw-rw-r-- 1 maltfield maltfield   18 Jul 28 01:57 threeFile.txt
[maltfield@hetzner2 backblaze]$ cat restore.2018-07/threeFile.txt
file contents 776
[maltfield@hetzner2 backblaze]$
- the downward sync (b2 -> local) spat out a ton of errors, but I think it worked (we don't really need this functionality anyway)
[maltfield@hetzner2 backblaze]$ ls -lah restore.2018-07/
total 8.0K
drwxr-xr-x 2 maltfield root 4.0K Jul 28 02:24 .
drwxr-xr-x 4 root      root 4.0K Jul 28 02:11 ..
[maltfield@hetzner2 backblaze]$ b2 sync b2://ose-server-backups/ restore.2018-07/
ERROR:b2.sync.action:an exception occurred in a sync action
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
    self.do_action(bucket, reporter)
  File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 143, in do_action
    bucket.download_file_by_name(self.b2_file_name, download_dest, SyncFileReporter(reporter))
  File "/usr/lib/python2.7/site-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
    return function(*wrapee_args, **wrapee_kwargs)
  File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 166, in download_file_by_name
    range_=range_,
  File "build/bdist.linux-x86_64/egg/b2/session.py", line 38, in wrapper
    return f(api_url, account_auth_token, *args, **kwargs)
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 210, in download_file_by_name
    url, account_auth_token_or_none, download_dest, range_=range_
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 236, in _download_file_from_url
    with self.b2_http.get_content(url, request_headers) as response:
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 337, in get_content
    response = _translate_and_retry(do_get, try_count, None)
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 119, in _translate_and_retry
    return _translate_errors(fcn, post_params)
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 55, in _translate_errors
    int(error['status']), error['code'], error['message'], post_params
FileNotPresent: File not present: b2_download(duplicity-full-signatures.20180719T211417Z.sigtar.gpg, 4_z5605817c251dadb96e4d0118_f1198b00a71235379_d20180728_m021554_c001_v0001101_t0032, /var/tmp/backblaze/restore.2018-07/duplicity-full-signatures.20180719T211417Z.sigtar.gpg, 1532744154000): FileNotPresent() File not present:
ERROR:b2.sync.action:an exception occurred in a sync action
Traceback (most recent call last):
  [...same traceback as above...]
FileNotPresent: File not present: b2_download(duplicity-full.20180719T211417Z.vol1.difftar.gpg, 4_z5605817c251dadb96e4d0118_f10121eebe2964534_d20180728_m021554_c001_v0001109_t0027, /var/tmp/backblaze/restore.2018-07/duplicity-full.20180719T211417Z.vol1.difftar.gpg, 1532744154000): FileNotPresent() File not present:
dnload two/twoFile.txt
dnload one/oneFile.txt
ERROR:b2.sync.action:an exception occurred in a sync action
Traceback (most recent call last):
  [...same traceback as above...]
FileNotPresent: File not present: b2_download(duplicity-full.20180719T211417Z.manifest.gpg, 4_z5605817c251dadb96e4d0118_f10155985b34af486_d20180728_m021553_c001_v0001033_t0048, /var/tmp/backblaze/restore.2018-07/duplicity-full.20180719T211417Z.manifest.gpg, 1532744153000): FileNotPresent() File not present:
dnload headFile.txt
dnload two/three/threeFile.txt
ERROR:b2.console_tool:ConsoleTool command error
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 1138, in run_command
    return command.run(args)
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 921, in run
    allow_empty_source=allow_empty_source
  File "/usr/lib/python2.7/site-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
    return function(*wrapee_args, **wrapee_kwargs)
  File "build/bdist.linux-x86_64/egg/b2/sync/sync.py", line 220, in sync_folders
    raise CommandError('sync is incomplete')
CommandError: sync is incomplete
ERROR: sync is incomplete
[maltfield@hetzner2 backblaze]$ ls -lah restore.2018-07/
total 20K
drwxr-xr-x 4 maltfield root      4.0K Jul 28 02:26 .
drwxr-xr-x 4 root      root      4.0K Jul 28 02:11 ..
-rw-rw-r-- 1 maltfield maltfield   19 Jul 28 01:56 headFile.txt
drwxrwxr-x 2 maltfield maltfield 4.0K Jul 28 02:26 one
drwxrwxr-x 3 maltfield maltfield 4.0K Jul 28 02:26 two
[maltfield@hetzner2 backblaze]$
- ok, I'm going to continue with this poc by including it in our daily backup scripts. With b2, we have only 10G of free space. Our daily encrypted backup file is 15G, so I'll just test this with the files in /etc/
[root@hetzner2 backups]# du -sh /etc
41M     /etc
[root@hetzner2 backups]# du -sh /home
119M    /home
[root@hetzner2 backups]# du -sh /var/log
288M    /var/log
- I forked our backup.sh script to /root/backups/backup2.sh. I'll run both for some time (probably a few months to truly test the lifecycle rules). Once I've determined that it works ok, I'll have Marcin add our billing information to the backblaze account, then we can merge it into backup.sh and deprecate the use of dreamhost for our backups.
- I'd also prefer that the b2 cli (which clearly _is_ buggy) run as an unprivileged user rather than root. So I created a new user called 'b2user' to run the `b2` binary.
[root@hetzner2 backblaze]# useradd b2user
[root@hetzner2 backblaze]# su - b2user
[b2user@hetzner2 ~]$ whoami
b2user
[b2user@hetzner2 ~]$ echo $HOME
/home/b2user
[b2user@hetzner2 ~]$ b2 ls ose-server-backups
ERROR: Missing account data: 'NoneType' object has no attribute '__getitem__'  Use: b2 authorize-account
[b2user@hetzner2 ~]$ export b2_prikey='<obfuscated>'
[b2user@hetzner2 ~]$ export b2_accountid='<obfuscated>'
[b2user@hetzner2 ~]$ b2 authorize-account ${b2_accountid} ${b2_prikey}
Using https://api.backblazeb2.com
[b2user@hetzner2 ~]$ b2 ls ose-server-backups
headFile.txt
one/
two/
[b2user@hetzner2 ~]$
- this new unprivileged 'b2user' user can also be used by root
[b2user@hetzner2 ~]$ logout
[root@hetzner2 backblaze]# sudo -u b2user b2 ls ose-server-backups
headFile.txt
one/
two/
[root@hetzner2 backblaze]#
- but the user can't access our 'sync' dir under /root, even if we make them the owner of the file
[root@hetzner2 backblaze]# ls -lah /root/backups/sync
total 16G
drwxr-xr-x 2 root root 4.0K Jul 27 07:42 .
drwx------ 5 root root 4.0K Jul 28 02:54 ..
-rw-r--r-- 1 root root  16G Jul 27 07:42 hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# chown b2user /root/backups/sync/hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# ls -lah /root/backups/sync
total 16G
drwxr-xr-x 2 root   root 4.0K Jul 27 07:42 .
drwx------ 5 root   root 4.0K Jul 28 02:54 ..
-rw-r--r-- 1 b2user root  16G Jul 27 07:42 hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# su - b2user
Last login: Sat Jul 28 02:53:45 UTC 2018 on pts/104
[b2user@hetzner2 ~]$ ls -lah /root/backups/sync/hetzner2_20180727_072001.tar.gpg
ls: cannot access /root/backups/sync/hetzner2_20180727_072001.tar.gpg: Permission denied
- symlinks won't help either, since b2user still lacks the execute permission needed to traverse the 0700 /root directory
[root@hetzner2 backblaze]# ln -s /root/backups/sync/hetzner2_20180727_072001.tar.gpg /home/b2user/
[root@hetzner2 backblaze]# chown b2user /home/b2user/hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# chown -h b2user /home/b2user/hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# ls -lah /home/b2user/
total 28K
drwx------   2 b2user b2user 4.0K Jul 28 02:56 .
drwxr-xr-x. 13 root   root   4.0K Jul 28 02:47 ..
-rw-------   1 b2user b2user 4.0K Jul 28 02:49 .b2_account_info
-rw-------   1 b2user b2user  462 Jul 28 02:59 .bash_history
-rw-r--r--   1 b2user b2user   18 Sep  6  2017 .bash_logout
-rw-r--r--   1 b2user b2user  193 Sep  6  2017 .bash_profile
-rw-r--r--   1 b2user b2user  231 Sep  6  2017 .bashrc
lrwxrwxrwx   1 b2user root     51 Jul 28 02:56 hetzner2_20180727_072001.tar.gpg -> /root/backups/sync/hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# su - b2user
Last login: Sat Jul 28 02:54:43 UTC 2018 on pts/104
[b2user@hetzner2 ~]$ ls
hetzner2_20180727_072001.tar.gpg
[b2user@hetzner2 ~]$ ls -lah hetzner2_20180727_072001.tar.gpg
lrwxrwxrwx 1 root root 51 Jul 28 02:56 hetzner2_20180727_072001.tar.gpg -> /root/backups/sync/hetzner2_20180727_072001.tar.gpg
[b2user@hetzner2 ~]$ cat hetzner2_20180727_072001.tar.gpg
cat: hetzner2_20180727_072001.tar.gpg: Permission denied
[b2user@hetzner2 ~]$
- I'll have to address this later. I want the b2user to be able to access this file, but nothing else in /root. And I want to preserve the logic that prevents backups from containing old backups. Probably the best option is to move the 'sync' dir to '/home/b2user/sync', but only copy files there *after* they've been encrypted. That way the 'b2user' user cannot access any of our super-sensitive data (i.e. config files with passwords in /etc contained within our backups).
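A minimal sketch of that staging idea (the '/home/b2user/sync' path and filenames are assumptions from the plan above; demoed here with throwaway temp dirs so it can run anywhere):

```shell
# stand-ins for /root/backups/sync (root-only) and /home/b2user/sync:
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/hetzner2_20180727_072001.tar.gpg"   # already-encrypted backup
touch "$src/plaintext-scratch.sql"              # must NOT be exposed to b2user
# copy only *.gpg files, so b2user never sees unencrypted data
for f in "$src"/*.gpg; do
  cp "$f" "$dst/"
done
ls "$dst"
# the real cron job would then upload as the unprivileged user, e.g.:
#   sudo -u b2user b2 sync --keepDays 3 /home/b2user/sync/ b2://ose-server-backups/
```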
=Wed Jul 25, 2018=
- Marcin forwarded me a message from Harmon about him not seeing the updates to the page https://www.opensourceecology.org/ose-fellowship/
- I tried loading the page, and it was stale, just as Harmon saw it. I logged in, and it changed to what Marcin saw. This is an issue with varnish cache clearing. We use a varnish plugin to trigger cache purges when a page/post is updated, but apparently it doesn't work. I don't have spare cycles to debug, and it's really not a huge issue since this is a read-mostly, write-infrequently website. And the cache clears itself after 24 hours anyway.
- I checked up on the glacier.sh script's status, which was triggered last week to encrypt & upload the hetzner1 backups to aws glacier
- the staging 'sync' dir appears to have the fileList metadata files we wanted; next step is to check that the archives themselves exist in glacier
[root@hetzner2 sync]# date
Wed Jul 25 22:21:59 UTC 2018
[root@hetzner2 sync]# pwd
/var/tmp/deprecateHetzner1/sync
[root@hetzner2 sync]# ls -lah
total 11M
drwxr-xr-x 2 root root 4.0K Jul 25 22:20 .
drwx------ 9 root root 4.0K Jul 17 00:15 ..
-rw-r--r-- 1 root root 810K Jul 19 20:40 hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2
-rw-r--r-- 1 root root 810K Jul 19 20:40 hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 4.0M Jul 19 20:55 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2
-rw-r--r-- 1 root root 4.0M Jul 19 20:55 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 100K Jul 19 20:50 hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2
-rw-r--r-- 1 root root 100K Jul 19 20:50 hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 102K Jul 19 20:44 hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2
-rw-r--r-- 1 root root 102K Jul 19 20:44 hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 186K Jul 19 20:49 hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2
-rw-r--r-- 1 root root 187K Jul 19 20:49 hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 7.4K Jul 17 00:51 hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2
-rw-r--r-- 1 root root 7.5K Jul 17 00:51 hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
[root@hetzner2 sync]#
- here's a snippet of one of the metadata fileList files, which shows what the very large encrypted backup archive actually contains
[root@hetzner2 sync]# bzcat hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2 | head -n 50 ================================================================================ This file is metadata for the archive 'hetzner1_final_backup_before_hetzner1_deprecation_osemain'. In it, we list all the files included in the compressed/encrypted archive (produced using 'ls -lahR /var/tmp/deprecateHetzner1/osemain'), including the files within the tarballs within the archive (produced using 'find /var/tmp/deprecateHetzner1/osemain -type f -exec tar -tvf '{}' \; ') This archive was made as a backup of the files and databases that were previously used on hetnzer1 prior to migrating to hetzner2 in 2018. Before we cancelled our contract for hetzner1, this backup was made & put on glacier for long-term storage in-case we learned later that we missed some content on the migration. Form more information, please see the following link: * https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation - Michael Altfield <maltfield@opensourceecology.org> Note: this file was generated at 2018-07-19 20:44:36+00:00 ================================================================================ ############################# # 'ls -lahR' output follows # ############################# /var/tmp/deprecateHetzner1/osemain: total 3.3G drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 17:48 . drwx------ 9 root root 4.0K Jul 17 00:15 .. 
-rw-r--r-- 1 maltfield maltfield 2.9G Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_home.tar.bz2 -rw-r--r-- 1 maltfield maltfield 1.2M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-openswh.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 187K Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_fef.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 157K Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osesurv.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 14 Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_website.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 203M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osewiki.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 212M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_webroot.tar.bz2 ================================================================================ ############################ # tarball contents follows # ############################ drwxr-xr-x osemain/osemain 0 2017-06-21 21:27 usr/home/osemain/bin/ -rw-r--r-- osemain/osemain 429 2011-07-19 03:46 usr/home/osemain/bin/wiki-import.sh lrwxrwxrwx osemain/osemain 0 2011-12-26 01:01 usr/home/osemain/bin/java -> ../jdk-6/bin/java lrwxrwxrwx osemain/osemain 0 2017-06-21 21:27 usr/home/osemain/bin/cleanLocal.pl -> ../backups/cleanLocal.pl -rwxr-xr-x osemain/osemain 1248055 2016-01-13 01:07 usr/home/osemain/bin/composer.phar lrwxrwxrwx osemain/osemain 0 2017-06-21 14:21 usr/home/osemain/bin/backup.sh -> ../backups/backup.sh lrwxrwxrwx osemain/osemain 0 2011-11-25 11:00 usr/home/osemain/bin/mbkp -> /usr/home/osemain/mbkp/mbkp -rw-r--r-- osemain/osemain 75 2016-01-13 01:48 usr/home/osemain/composer.json -rw-r--r-- osemain/osemain 32967 2016-01-13 01:48 
usr/home/osemain/composer.lock drwxr-xr-x osemain/osemain 0 2017-08-26 15:34 usr/home/osemain/cron/ -rw-r--r-- osemain/osemain 648 2011-07-19 13:28 usr/home/osemain/emails.txt drwxr-xr-x osemain/osemain 0 2016-01-13 01:48 usr/home/osemain/extensions/ drwxr-xr-x osemain/osemain 0 2016-01-13 01:48 usr/home/osemain/extensions/Validator/ -rw-r--r-- osemain/osemain 755 2014-06-25 21:40 usr/home/osemain/extensions/Validator/phpunit.xml.dist -rw-r--r-- osemain/osemain 62 2014-06-25 21:40 usr/home/osemain/extensions/Validator/.gitignore -rw-r--r-- osemain/osemain 467 2014-06-25 21:40 usr/home/osemain/extensions/Validator/.travis.yml -rw-r--r-- osemain/osemain 18575 2014-06-25 21:40 usr/home/osemain/extensions/Validator/COPYING drwxr-xr-x osemain/osemain 0 2014-06-25 21:40 usr/home/osemain/extensions/Validator/src/ drwxr-xr-x osemain/osemain 0 2014-06-25 21:40 usr/home/osemain/extensions/Validator/src/legacy/ -rw-r--r-- osemain/osemain 14953 2014-06-25 21:40 usr/home/osemain/extensions/Validator/src/legacy/ParserHook.php -rw-r--r-- osemain/osemain 222 2014-06-25 21:40 usr/home/osemain/extensions/Validator/src/legacy/README.md
[root@hetzner2 sync]# bzcat hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2 | tail -n 20
-rw-r--r-- osemain/osemain   702 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/jsduck/external.js
-rw-r--r-- osemain/osemain  2613 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/jsduck/CustomTags.rb
-rw-r--r-- osemain/osemain  1988 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/checkBadRedirects.php
-rw-r--r-- osemain/osemain  3087 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/deleteOrphanedRevisions.php
-rw-r--r-- osemain/osemain  6072 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/populateImageSha1.php
drwxr-xr-x osemain/osemain     0 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/hiphop/
-rwxr-xr-x osemain/osemain   479 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/hiphop/run-server
-rw-r--r-- osemain/osemain   482 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/hiphop/server.conf
-rw-r--r-- osemain/osemain  3285 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/deleteOldRevisions.php
-rw-r--r-- osemain/osemain  2002 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/patchSql.php
-rw-r--r-- osemain/osemain  1901 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/dumpSisterSites.php
-rw-r--r-- osemain/osemain  1531 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/tidyUpBug37714.php
-rw-r--r-- osemain/osemain 10863 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/backup.inc
-rw-r--r-- osemain/osemain  5018 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/cleanupUploadStash.php
-rw-r--r-- osemain/osemain  3168 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/populateRecentChangesSource.php
-rw-r--r-- osemain/osemain  1868 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/deleteArchivedFiles.php
-rw-r--r-- osemain/osemain  2383 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/showSiteStats.php
-rw-r--r-- osemain/osemain   526 2015-06-19 19:51 usr/www/users/osemain/old.html
-rw-r--r-- osemain/osemain   883 2015-06-19 13:11 usr/www/users/osemain/oldu.html.done
================================================================================
[root@hetzner2 sync]#
- I kicked off a sync; tomorrow I'll be able to print this job's results from the `aws` tool
[root@hetzner2 sync]# date
Wed Jul 25 22:31:02 UTC 2018
[root@hetzner2 sync]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 sync]# glacier.py --region us-west-2 vault --wait sync deleteMeIn2020
usage: glacier.py [-h] [--region REGION] {vault,archive,job} ...
glacier.py: error: unrecognized arguments: --wait
[root@hetzner2 sync]# glacier.py --region us-west-2 vault sync --wait deleteMeIn2020
- ...
- Marcin hit a 500 error when attempting to access a new user's resume at https://wiki.opensourceecology.org/index.php?title=Special:ConfirmAccounts/authors&file=5c659e12860088d40865bd5637dde1c8eeb61762.pdf :
- strange, I couldn't hit this URL myself. The iptables logs (/var/log/kern.log) say my traffic is being dropped, but neither `iptables-save` nor `iptables -nL` shows a rule that _should_ be blocking me
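- as a sanity check, the fields in a kernel-log drop entry can be extracted and then grepped for in the `iptables-save` output; a minimal sketch (the log line below is made up; the real format depends on the LOG rule's `--log-prefix` and on what fields the kernel emits):

```shell
# hypothetical kern.log entry from an iptables LOG rule; the actual
# prefix and fields depend on the rule's --log-prefix settings
line='Jul 25 22:31:02 hetzner2 kernel: IPTables-Dropped: IN=eth0 OUT= SRC=1.2.3.4 DST=138.201.84.243 PROTO=TCP DPT=443'

# pull out the fields worth searching for in `iptables-save` output
echo "$line" | grep -oE 'SRC=[^ ]+|DST=[^ ]+|PROTO=[^ ]+|DPT=[^ ]+'
```

each extracted token can then be grepped for in `iptables-save` output to find (or rule out) the rule responsible for the drop.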
Mon Jul 23, 2018
- Chris got a new email address & PGP key. I sent him Marcin's & my PGP keys.
- Marcin mentioned that we need a new version of OSE Linux before the Boot Camp on Aug 25th. I asked Chris if he would have time (after his exams) to push a new version with the CNC Circuit Mill Toolchain + Persistence before then. He said he should be able to release it before Aug 17th.
Thu Jul 19, 2018
- interesting, osedev.org is down this morning
- Hetzner shut down our (hopefully now deprecated) hetzner1 server as requested. They said they rebooted it into a "live-rescue" system so it could be re-enabled quickly (their email response delay is ~12 hours, so I'm not sure what this buys us).
- I did an nmap scan & saw that ssh was still enabled, but using a different set of host keys. I sure hope they didn't do something stupid like leave PermitRootLogin enabled with some shitty default password
user@ose:~$ nmap dedi978.your-server.de

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-19 15:16 EDT
Nmap scan report for dedi978.your-server.de (78.46.3.178)
Host is up (0.19s latency).
Not shown: 942 closed ports, 56 filtered ports
PORT    STATE SERVICE
22/tcp  open  ssh
222/tcp open  rsh-spx

Nmap done: 1 IP address (1 host up) scanned in 22.70 seconds
user@ose:~$
- I confirmed that all the old domains on hetzner1 are now inaccessible
- I updated our wiki article on the CHG to deprecate hetzner1 with the latest status updates https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation
- I went to check on the status of glacier.py's inventory sync, and the job still appears to be pending!
[root@hetzner2 ~]# glacier.py --region us-west-2 job list
i/d 2018-07-18T22:54:13.824Z deleteMeIn2020
[root@hetzner2 ~]# glacier.py --region us-west-2 vault list
deleteMeIn2020
[root@hetzner2 ~]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 ~]# date
Thu Jul 19 20:04:51 UTC 2018
[root@hetzner2 ~]#
- I don't know why glacier.py is totally failing at displaying the inventory of our vault. Let's try the native `aws glacier` tool
[root@hetzner2 ~]# aws glacier describe-vault --account-id - --vault-name deleteMeIn2020
{
    "SizeInBytes": 290416046788,
    "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020",
    "LastInventoryDate": "2018-07-17T08:46:39.837Z",
    "NumberOfArchives": 55,
    "CreationDate": "2018-03-29T21:36:06.041Z",
    "VaultName": "deleteMeIn2020"
}
[root@hetzner2 ~]#
- alright, ^ that's better than glacier.py. I did a listing of the jobs & found that my inventory retrieval succeeded
[root@hetzner2 ~]# aws glacier list-jobs --account-id - --vault-name deleteMeIn2020
{
    "JobList": [
        {
            "CompletionDate": "2018-07-18T22:54:13.824Z",
            "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020",
            "InventoryRetrievalParameters": {
                "Format": "JSON"
            },
            "Completed": true,
            "InventorySizeInBytes": 20532,
            "JobId": "4JhNUasCqaK2UkHm2CNvrk2kEQeBgUJViT5iHJF6HimCZZIufNbCxTX3GH6lKpeVIDd4iuzzZ6Q8SvXoZo0B3shKLALb",
            "Action": "InventoryRetrieval",
            "CreationDate": "2018-07-18T19:09:42.415Z",
            "StatusMessage": "Succeeded",
            "StatusCode": "Succeeded"
        }
    ]
}
[root@hetzner2 ~]#
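- this filtering can also be done programmatically; a sketch (not part of my actual workflow) that picks the JobIds of completed inventory retrievals out of the `list-jobs` JSON, ready to feed to `get-job-output --job-id`:

```python
# sketch: filter the `aws glacier list-jobs` JSON down to the JobIds of
# completed inventory-retrieval jobs
import json

def completed_inventory_jobs(list_jobs_output):
    jobs = json.loads(list_jobs_output).get("JobList", [])
    return [j["JobId"] for j in jobs
            if j.get("Action") == "InventoryRetrieval" and j.get("Completed")]

# trimmed-down input shaped like the list-jobs output above
sample = '{"JobList": [{"JobId": "4JhNUasC", "Action": "InventoryRetrieval", "Completed": true}]}'
print(completed_inventory_jobs(sample))  # ['4JhNUasC']
```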
- I got the job's output in json
[root@hetzner2 ~]# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id '4JhNUasCqaK2UkHm2CNvrk2kEQeBgUJViT5iHJF6HimCZZIufNbCxTX3GH6lKpeVIDd4iuzzZ6Q8SvXoZo0B3shKLALb' output.json
{
    "status": 200,
    "acceptRanges": "bytes",
    "contentType": "application/json"
}
[root@hetzner2 ~]#
- the inventory is there! but it's not easy to read
[root@hetzner2 ~]# cat output.json {"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020","InventoryDate":"2018-07-17T08:46:39Z","ArchiveList":[{"ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:35:48Z","Size":380236,"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"},{"ArchiveId":"lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:50:36Z","Size":280709,"SHA256TreeHash":"3f79016e6157ff3e1c9c853337b7a3e7359a9183ae9b26f1d03c1d1c594e45ab"},{"ArchiveId":"fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:53:00Z","Size":280718,"SHA256TreeHash":"6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4"},{"ArchiveId":"zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name 
indicates","CreationDate":"2018-03-31T02:55:04Z","Size":1187682789,"SHA256TreeHash":"c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f"},{"ArchiveId":"Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:57:50Z","Size":877058000,"SHA256TreeHash":"fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b"},{"ArchiveId":"P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA","ArchiveDescription":"hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name","CreationDate":"2018-03-31T22:46:18Z","Size":2299038,"SHA256TreeHash":"2e789c8c99f08d338f8c1c2440afd76c23f76124c3dbdd33cbfa9f46f5c6b2aa"},{"ArchiveId":"o-naX0m4kQde-2i-8JZbEESi7r8OlFjIoDjgbQSXT_zt9L_e7qOH3HQ1R7ViQC3i7M0lVLbODsGZm9w9HfI3tHYKb2R1T_WWBwMxFuC_OhYiPX8uepTvvBg2Mg6KysP9H3zNzwGSZw","ArchiveDescription":"hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix 
indicates","CreationDate":"2018-03-31T23:47:51Z","Size":12009829896,"SHA256TreeHash":"022f088abcfadefe7df5ac770f45f315ddee708f2470133ebd027ce988e1a45d"},{"ArchiveId":"mxeiPukWr03RpfDr49IRdJUaJNjIWQM4gdz8S8k3-_1VetpneyWZbwEVKCB1uMTYpPy0L6HZgZP7vJ6b7gz1oeszMnlzZR0-W6Rgt4O0BZ_mwgtGHRKOH0SIpMJHRnePaq9SBR9gew","ArchiveDescription":"hetzner1_20171001-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-01T20:20:52Z","Size":2309259,"SHA256TreeHash":"2d2711cf7f20b52a22d97b9ebe7b6a0bd45a3211842be66ca445b83fbc7984e5"},{"ArchiveId":"TOZBeL9sYVRtzy7gsAC1d930vcOhEBaABsh1ejb6vvad_NVSLu_1v0UvWqwkkf7x_8CCu6_WxolooSClZMhQOA21J_0_HP9GxvPkUvdSOeqmHjuANbIS82IRBOjFT4zFUoZnPhcVUg","ArchiveDescription":"hetzner1_20171001-052001.tar.gpg","CreationDate":"2018-04-01T21:42:48Z","Size":12235848201,"SHA256TreeHash":"a9868fdd4015fedbee5fb7b555a07b08d02299607eb64e73da689efb6bbad1ed"},{"ArchiveId":"LdlFgzhEnxVsuGMZU4d2c_rfMTGM_3iCvLUZZSpGmmLArCQLs8HxjWLwfDDeKPKEarvSgXOVA-Evy4Ep5WAzESoofG5jdCidL5OispSfHElpPu-60xbmNvQt9neLGZrwa3C_iESGiw","ArchiveDescription":"hetzner1_20171101-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-02T18:52:49Z","Size":2325511,"SHA256TreeHash":"920247e75ab48e16f18e7c0528778ea95ac0b74ffb18cdb3a68c0538d3e701e4"},{"ArchiveId":"6GHR8GlRG4EIlkA7O_Ta6BAXN3BQ7HmP0V7TgOp6bOa4cxuIlbHkmCd3I2lUSNwfG1penWOibFvvDhzgcihdmUMtCLepT3rl6HtFR5Lv-ro5mIegCcWQJOUDT0FRfsb7e7IkAze02Q","ArchiveDescription":"hetzner1_20171101-062001.tar.gpg","CreationDate":"2018-04-02T20:18:50Z","Size":12385858738,"SHA256TreeHash":"24c67d8686565c9f2b8b3eeacf2b4a0ec756a9f2092f44b28b56d2074d4ad364"},{"ArchiveId":"lryfyQFE4NbtWg5Q6uTq8Qqyc-y9il9WYe7lHs8H2lzFSBADOJQmCIgp6FxrkiaCcwnSMIReJPWyWcR4UOnurxwONhw8fojEHQTTeOpkf6fgfWBAPP9P6GOZZ0v8d8Jz_-QFVaV6Bw","ArchiveDescription":"hetzner1_20171201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-02T20:56:23Z","Size":2332970,"SHA256TreeHash":"366449cb278e2010864432c0f74a4d43bceff2e6a7b2827761b63d5e7e737a01"},{"ArchiveId":"O19wuK1PL_Wwf59-fjQuVP2Con0LXLf5Mk9xQA3HDPw4y1
ZdwjYdFzmhZdaMUtGX666YKKjJu823l2C6seOTLg1ZbXZVTqQjZTeZGkQdCSRQdxyo3pEPWE2Iqpgb61FCiIETdCANUQ","ArchiveDescription":"hetzner2_20170702-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T12:29:09Z","Size":2039060,"SHA256TreeHash":"24df13a4150ab6ae7472727769677395c5e162660aea8aa0748833ad85c83e7a"},{"ArchiveId":"6ShVCMDoqdhc4wg84L1bXaq3O2InX-qB9Q9NMRH-xJQ0_TSlIN5b3fysow9-_RuNYc2lK958NrwFiIEa7Q0bVaT9LaZQH8WtoTqnX3DN2xJhb4_KUdu6iUaDdJUoPfsSXtC7xvPb-w","ArchiveDescription":"hetzner2_20170702-052001.tar.gpg","CreationDate":"2018-04-04T15:52:53Z","Size":21323056209,"SHA256TreeHash":"55030f294360adf1ed86e6e437a03432eb990c6d9c3e6b4a944004ad88d678e8"},{"ArchiveId":"0M5MSxjrlWJiT0XrncbVBITR__anuTLeOhcq9XvqsX0Q1koa0K0bH-wrZOQO7YsqqPv5Te3AUXPOCzIO6F0g5DQ2tOZq8E_YHX0XmMGjnOfeHIV9m_5GiCQAi3PrUuWM3C4cApTs7A","ArchiveDescription":"hetzner2_20170801-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T15:54:20Z","Size":198754,"SHA256TreeHash":"978f15fec9eef2171bdddf3da501b436d3bc3c7e518f2e70c0a1fd40ccf20d2a"},{"ArchiveId":"fwR6U5jX2T9N4mc14YQNMoA52vXICj-vvgIvYyDO5Qcv-pNeuXarT4gpzIy-XjuuF4KXkp9BXD13AA3hsau9PfW0ypy874m7arznCaMZO8ajm3NIicawZMiHGEikWw82EGY0z4VDIQ","ArchiveDescription":"hetzner2_20170801-072001.tar.gpg","CreationDate":"2018-04-04T16:08:27Z","Size":1746085455,"SHA256TreeHash":"6f3c5ede57e86991d646e577760197a01344bf013fb17a966fd7e2440f4b1062"},{"ArchiveId":"EZG83EoQ65jxe4ye0-0qszEqRjLE3lAb2Vi7vZ2eYvj1bVJnTc5kvfWgTxl4_w2G1PPk4pn6g2dIsYXosWk3OqWNaWNcYEOHEkNREHycnTpcl0rBkWJoimt9fCKLJCF7FiGavWUMSw","ArchiveDescription":"hetzner2_20170901-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:09:29Z","Size":287980,"SHA256TreeHash":"b72f11bb747ddb47d88f0566122a1190caf569b2b999ed22b8a98c6194ac4a0e"},{"ArchiveId":"5xqn4AAJhxnOHLJMfkvGX3Qksj5BTiEyHURILglfH0TPh_GfvbZNHqzdYIW-8sMtJ8OQ1GnnFqAOpty5mMwOSEjaokWkrQhEZK9-q7FBKDXXglAlqQKEJpd2UcTQI47zBEmGRasm-A","ArchiveDescription":"hetzner2_20170901-072001.tar.gpg","CreationDate":"2018-04-04T16:27:43Z","Size":1800393587,"SHA256Tre
eHash":"87400a80fc668b03ad712eaf8f6096172b5fc0aaa81309cc390dd34cc3caecec"},{"ArchiveId":"3XL4MENpH6i2Dp6micFWmMR2-qim3D1LQGiyQHME_5_A5jAbepw7WDEJOS2m2gIudSXfCuyclHTqzZYEpr6RwTGIEmYGw1jQ-EDPWYzjGTGDJcwWZEiklTmhLgvezqfyeSnQsdQZtA","ArchiveDescription":"hetzner2_20171001-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:29:10Z","Size":662050,"SHA256TreeHash":"506877424ae5304ca0c635d98fb8d01ad9183ec46356882edf02d61e9c48da8d"},{"ArchiveId":"g8RFNrkxynpQ8Yt9y4KyJra09dhxd3fIJxDlaUeDYBe615j7XON8gAdHMAQVerPQ4VF10obcuHnp64-kJFMmkG722hrlp3QBKy262CD4CcSUTSk3m070Mz6q3xySkcPzqRyxDwjtYg","ArchiveDescription":"hetzner2_20171001-072001.tar.gpg","CreationDate":"2018-04-04T16:51:09Z","Size":2648387053,"SHA256TreeHash":"1bf72e58a8301796d4f0a357a3f08d771da53875df4696ca201e81d1e8f0d82b"},{"ArchiveId":"ktHLXVqR5UxOoXEO5uRNMrIq4Jf2XrA6VmLQ0qgirJUeCler9Zcej90Qyg9bHvhQJPreilT4jwuW08oy7rZD_jnjd_2rcdZ11Y5Zl3V25lSKdRPM-b21o21kaBEr_ihhlIxOmPqJXg","ArchiveDescription":"hetzner2_20171101-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:51:40Z","Size":280741,"SHA256TreeHash":"f227ecd619df1564f2bb835029864fad804461db5496f6825a76e4720e3098a7"},{"ArchiveId":"iUmKTuLdEX3By9oHoqPtJd4KpEQ_2xh5PKV4LPuwBDcXyZmtt4zfq96djdQar1HwYmIh64bXEGqP7kGc0hk0ZtWZc12TtFUL0zohEbKBYr2VFZCQHjmc461TMLskKsOiyd6HbuKUWg","ArchiveDescription":"hetzner2_20171101-072001.tar.gpg","CreationDate":"2018-04-04T16:59:35Z","Size":878943524,"SHA256TreeHash":"7cf75fb3c67f0539142708a4ff9c57fdf7fd380283552fe5104e23f9a0656787"},{"ArchiveId":"6gmWP3OdBIdlRuPIbNpJj8AiaR-2Y4FaPTneD6ZwZY2352Wfp6_1YNha4qvO1lapuITAhjdh-GzKY5ybgJag8O4eh8jjtBKuOg3nrjbABpeS7e6Djc-7PEiMKskaeldv5M52gHFUiA","ArchiveDescription":"hetzner2_20171202-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:00:09Z","Size":313055,"SHA256TreeHash":"cfac22e7a2b59e28fe13fb37567d292e5ee1e9c06da264573091e26a2640a161"},{"ArchiveId":"4Ti7ZVFaexAncEDgak5Evp97aQk45VLA6cix3OCEB1cuGM6akGq2pINO8bzUjhEV8nvpqLLqoa_MSxPWTFl4uQ8sPUCDqG0vayB8PhYHcyNES09BQR9cE2HlR7qfxMDl5U
e946jcCw","ArchiveDescription":"hetzner2_20171202-072001.tar.gpg","CreationDate":"2018-04-04T17:12:23Z","Size":1046884902,"SHA256TreeHash":"d1d98730e5bb5058ac96f825770e5e2dbdbccb9788eee81a8f3d5cb01005d4e5"},{"ArchiveId":"GSWslpTGXPiYW5-gJZ4aLrFQGfDiTifPcqsSbh8CZc6T4K8_udBkSrNV0GNZQB9eLoRrUC5cXYT06FSvZ8kltgM61VUVYOXvO0ox4jYH68_sjHnkUmimk8itpa34hBC_c0zS0ZFRLQ","ArchiveDescription":"hetzner2_20180102-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:13:04Z","Size":499163,"SHA256TreeHash":"dfbc8647e0c402d1059691322ba9f830d005eae75d38840792b5cb58cd848c76"},{"ArchiveId":"3nDMsn_-0igfg6ncwMqx3-UxQLi-ug6LEoBxqyZKsMhd83PPoJk1cqn6QFib2GeyIgJzfCZoTlwrpe9O0_GnrM7u_mUEOsiKTCXP0NadvULehNcUx-2lWQpyRrCiDg5fcBb-f7tY0g","ArchiveDescription":"hetzner2_20180102-072001.tar.gpg","CreationDate":"2018-04-04T17:22:57Z","Size":1150541914,"SHA256TreeHash":"9ca7fa55632234d3195567dc384aaf2882348cccb032e7a467291f953351f178"},{"ArchiveId":"CnSvT3qmkPPY7exbsftSC-Ci71aqjLjiL1eUa3hYo3OfVkF4s2SQ8n39rH5KaQwo3GTHeJZOVoBTW9vMEf2ufYKc9e_eVAfVcmG-bLgncRQrrV-DlE2hYglzdAalx3H5OXBY8jlD9Q","ArchiveDescription":"hetzner2_20180202-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:31:24Z","Size":2480097,"SHA256TreeHash":"ae746d626f04882c2767e9db7dd1ffd735b3e08bc6de5877b0b23174f14cc1ff"},{"ArchiveId":"WWIYVa-hqJzMS8q1UNNZIKfLx1V3w3lzqpCLWwflYBM7yRocX2CEyFA-aY2EKJt0hRLTshgLXE3L3Sni8bYabDLBrV2Gehgq9reRTRhn8cxoKks4f1NmZwCCTSs6L4bQuJnjjNvOKw","ArchiveDescription":"hetzner2_20180302-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T18:36:50Z","Size":3530165,"SHA256TreeHash":"52f24fe7626401804799efc0407b892b7a0188f8f235d183228c036a5544d434"},{"ArchiveId":"XQYjqYnyYbKQWIzc1BSWQpn2K8mIoPQQH-bnoje7dB3BGCbzTjbEATGYSV1qJMbeUhiT_b7lwDiZzW1ZEbHVCgMDrWxCswG3eTZxiFdSwym7rELpFh5eC7XQlxuHjHocLY2zbUhYvg","ArchiveDescription":"hetzner2_20180401-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T22:19:13Z","Size":1617586,"SHA256TreeHash":"21c578c4b99abab6eb37cb754dd36cdcc71481613bf0031886cca81cd87c8d6b"}
,{"ArchiveId":"kn9SKSliFV1eHh_ax1Z9rEWXR5ETF3bhdoy6IuyItI3w63rBgxaNVNk5AFJLpcR2muktNFmsSEp8QucM-B4tMdFD6PtE4K8xPJe_Cvhv3G4e2TPKn3d9HMD5Bx3XjTimGHK6rHnz0A","ArchiveDescription":"hetzner2_20180401-072001.tar.gpg","CreationDate":"2018-04-04T22:43:39Z","Size":2910497961,"SHA256TreeHash":"e82e8df61655c53a27f049af8c97df48702d5385788fb26a02d37125d102196a"},{"ArchiveId":"4-Rebjng1gztwjx1x5L0Z1uErelURR5vmCUGD3sEW6rBQRUHRjyEQWL22JAm6YPpCoBwIxzVDPyC2NvSofxx2InjmixAUoQsyy3zAgGoW0nSlqNQPfeF1hkRdOCyIDutfMTQ1keEQw","ArchiveDescription":"hetzner1_20170701-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T00:36:36Z","Size":2430229,"SHA256TreeHash":"e84e7ff3fb15b1c5cf96b3f71ee87f88c39798aea3a900d295114e19ffa0c29f"},{"ArchiveId":"OVSNJHSIy5f1WRnisLdZ9ElWY4AjdgZwFqk3vDISCtypn5AHVo7wDGOAL76SpF0XzAd-yLgD3fIzf7mvgR4maA_HCANBhIP7Sdvhi7MLMjLnXLoKoHuKayBok_VLNRFfT5XORaTemA","ArchiveDescription":"hetzner1_20170801-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T03:52:16Z","Size":2485018,"SHA256TreeHash":"27ee0c5d5f20b87ff9c820dac1e5f3e989ab4ba679e94a8034a98d718564a5cd"},{"ArchiveId":"N1TB1zWhwJq20nTRNcIzVIRL9ms1KnszY0C4XAKhfTgtuWaV1SFWlqaA0xb6NjbX6N3XDisuP0bke-I0G_8RbsFQ_PcRTwRZzNEbr4LOU4WFhLM86s-FjDwjdJHmgyttfMh_1K9RLQ","ArchiveDescription":"hetzner1_20180101-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T07:28:56Z","Size":2349744,"SHA256TreeHash":"943aa9704177a35cd45ae11b705d9e238c9e6af1c86bc6ebed46c0ae9deff97a"},{"ArchiveId":"wJyG1vWz9IcB8-mnLm9bY3do9KIsxNY9nQ8ClQaOALesN-k3R5GU11p7Q3sVeStelg9IzWvburDcVFdHmJIYHC9RuRbuSZbk_rQvxxrkhtDcviu4i9_hN4SnPHvV3i0hITuiEFGpkA","ArchiveDescription":"hetzner1_20180201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T10:07:11Z","Size":2414523,"SHA256TreeHash":"704e59a90b38e1470a7a647f21a9c0e2887b84d73af9cc451f1b0b1c554b5cb7"},{"ArchiveId":"hPtzfNk9SSUpI-_KihUEQOb89sbrK3tr0-3au-pe7al_e8qetM7uQEbNTH4_oWPqD2yajF79XPXxi4wkqAcQjoAN4IhnkPVb846wODKTpFXkRs9V8lz6nW0t_GdR2c9uYXf-xM_MpQ","ArchiveDescription":"hetzner1_20180201-062001.tar
.gpg","CreationDate":"2018-04-05T13:47:38Z","Size":28576802304,"SHA256TreeHash":"dd538e88ce29080099ee59b34a7739885732e1bb6dfa28fe7fa336eb3b367f47"},{"ArchiveId":"osvrVQsHYSGCO30f0kO9aneACAA8h80KBmqfBMqDG3RioepW6ndLlNBcSvhfQ2nrcWBwLabIn4A7Rkr7sjbddViPo92viBh4lyZdyDwVcm6Pp1hQv-p2j0vldxYLWpyLDflQ8QRn4A","ArchiveDescription":"hetzner1_20180301-062002.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T15:05:32Z","Size":2436441,"SHA256TreeHash":"b3e6651391632d17ecc729255472cd5affaea7c2c65585a5d71d258199d6af48"},{"ArchiveId":"OtlG0WN4qd8kIg3xRQvRHoAzICwHRg6S3I8df5r_VRKaUNzJCsnwbO8Z9RiJPAAqqqVqg9I_GKhnt7txvEdUjx5s9hLywWm_OcRm5Lj_rJV_dupUwVlTG8HsdnCIwFseGa1JD5bviw","ArchiveDescription":"hetzner1_20180301-062002.tar.gpg","CreationDate":"2018-04-05T18:57:24Z","Size":29224931379,"SHA256TreeHash":"3a6b009477ffe453f5460ab691709ce0dcdf6e9ae807a43339b61f0e6c5785ab"},{"ArchiveId":"2PAyQClvhEMhO-TxdAvV9Qdqa_Lvh4webx9hHIXbVnQQHJxMlhWPikmVpr1zTQRgy23r-WcOouH6gLKQ7WBRSH5yM8q5f8gb0Z2anOAwdR4A9DtxqDIVtI78-7Bs3Bf2b0fYbPQCWw","ArchiveDescription":"hetzner1_20180401-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T19:31:28Z","Size":2231140,"SHA256TreeHash":"a8a2712abf9434fa38d9aa3eb52c69accffdb4a2abe79425c3057d537b014a47"},{"ArchiveId":"Gn7a5jzeimXwa3su0i02OAK2XFmK9faX2WZx77Zq_tOx6j7ihpFEnkuF97Dpo66NgF7M24orh50kMSphvzLex_NbP9tDNoOI8mYG0-7GzOmNSmw9NaZpMLGn9NAVKbxs0byJ3YkquA","ArchiveDescription":"hetzner1_20180401-052001.tar.gpg","CreationDate":"2018-04-05T21:05:59Z","Size":12475250059,"SHA256TreeHash":"e256db8915834ddc3c096f1f3b9e4315bb56857b962722fb4093270459ed1116"},{"ArchiveId":"UqxNCpEu1twmhb9qLPxpQXMBv6yLyR37rZ1T_1tQjdl8x0RwukdIoOEGcmpHwdtrJgTA2OrWZ3ZYTncxkXojwWAOROW-wJ4SJANFfxwvGfueFNUSn17qTggcqeE43I5P1xmlxb25wg","ArchiveDescription":"hetzner1_20170701-052001.tar.gpg","CreationDate":"2018-04-07T19:26:56Z","Size":40953093076,"SHA256TreeHash":"5bf1d49a70b4031cb56b303be5bfed3321758bdf9242c944d5477eb4f3a15801"},{"ArchiveId":"NR3Z9zdD2rW0NG1y3QW735TzykIivP_cnFDMCNX6RcIPh0mRb_6QiC5qy1Gr
BTIoroorfzaGDIKQ0BY18jbcR3XfEzfcmrZ1FiT1YvQw-c1ag6vT46-noPvmddZ_zyy2O1ItIygI6Q","ArchiveDescription":"hetzner1_20171201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-07T21:02:35Z","Size":2333066,"SHA256TreeHash":"9775531693177494a3c515e0b3ab947f4fd6514d12d23cb297ff0b98bc09b1be"},{"ArchiveId":"3wjWOHj9f48-L180QRkg7onI5CbZcmaaqinUYJZRheCox-hc021rQ3Tl1Houf0s5W-qzk6HVRz3wkilQI_TAi2PXWaFUMibz00DAQfGj9ZQKeSOlxE_3qsIRcmYsYo-TMaU2UsSqNA","ArchiveDescription":"hetzner1_20171201-062001.tar.gpg","CreationDate":"2018-04-07T21:55:57Z","Size":12434596732,"SHA256TreeHash":"c10ce8134ffe35ba1e02d6076fc2d98f4bb3a288a5fe051fcb1e33684365ee19"},{"ArchiveId":"OfCmIMVetV8SxOBYUGFWldcHWFaFuGeLrYYm3A4YrvUU93zBrCLkOoBssToY1QIt_ZGwIueTgyoLTADetpfgswaoou_CwD8xfqss1hQAbQ7CaKW6sQHD-kcw4ii-D1h22lap95AZ4g","ArchiveDescription":"hetzner2_20180202-072001.tar.gpg","CreationDate":"2018-04-07T23:39:13Z","Size":14556435291,"SHA256TreeHash":"456b44a88a8485ceaf2080b15f0b6b7e6728caaec6edf86580369dfe91531df9"},{"ArchiveId":"PLs1lsB4c1dV3YaBG1y2SN3OEWmtImJVlz6CA6IknA6y3R8yfQV3FXcLXWC_YpczM6t05xigcynA7m1A6GkuHIyTDOr6-DCOLlEvxDHmFrA4hrzJkl2pLquNWJ9yc-JC83ZV4SkM-Q","ArchiveDescription":"hetzner2_20180302-072001.tar.gpg","CreationDate":"2018-04-08T01:47:51Z","Size":26269217831,"SHA256TreeHash":"c5b96e3419262c4c842bd3319d95dd07c30a1f00d57a2a2a6702d1d502868c98"},{"ArchiveId":"QwTHHmRo-NpqTTe2uy87GgB2MVydnz--3-3Z5u_0gdh5FPxEl2YSyjmJy3CKNDmJaNtrmwLeRF4_GubyZFc-CzlWl6OqZmINkCVSz34wY-k336C8HUOoKm5tPV3riSYaPb7WjjXwNQ","ArchiveDescription":"hetzner1_20170801-052001.tar.gpg","CreationDate":"2018-04-08T09:10:31Z","Size":41165282020,"SHA256TreeHash":"3fca81cf39f315fb2063714fffa526a533a7b3a556a6c4361a6ca3458ed66f29"},{"ArchiveId":"EmeH9kAWeVAyMa68pIknrJ135ZyXKB8WcjVKGQ58cVQE4Q98SMsX1OerOA4-_Q6epBJ_hgUT7ztFQ5d6PNiPRJ3H8uUIqXG3pkve5MaeA_cqAqvu4apBhU2HgALb1iS3NKy5IRdeUg","ArchiveDescription":"hetzner1_20180101-062001.tar.gpg","CreationDate":"2018-04-08T10:22:21Z","Size":12517449646,"SHA256TreeHash":"27e393d0ece9cadafa59
02b26e0685c4b6990a95a7a18edbce5687a2e9ed4c55"},{"ArchiveId":"NX074yaGa7FGL_QH5W9-mZ9TVmi0T2428B1lW8aEck6Ydjk3H3W6pgQTisOqE9B7azs1jykJ_IL-fdbkLzhAmrpWNGJBq5hVjfMNSP-Msm976Mf7mnXe6Z6QDkO5PVXaFsNZ1EzNyw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:29:47Z","Size":258332,"SHA256TreeHash":"7541016c23b31391157a4151d9277746585b220afdf1e980cafe205474f796d4"},{"ArchiveId":"uHk-GTBb6LVulxOkgs_ZYdF-cvKubUpvdP7hoS9Cqduw8YPInJaHB4LbBHpIxOL1idfYoMm-h4YI_Jq8qN3EnOBHiAjqUEwJAstagfMEvk2E38IlNLu_5J_09E0JM7MZXc4RSEZfNA","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:35:50Z","Size":7277,"SHA256TreeHash":"f431faa85f3d3910395e012c9eeeba097142d778b80e1d5c0a596970cc9c2fe6"},{"ArchiveId":"n8UslfWy3wmFYZNYJF3PfuxVoLNORVes-IunJoyzKJDYMNqmkwybrG9KVGoL4sbRspq0Tqmccn87hLGZ_A7kjBB6fvnWuAOjALhNinbDe-RkESPVPWN6464vfCIf3BI3NhK0_nzCNw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:35:52Z","Size":258332,"SHA256TreeHash":"2846f515119510f2d03f84d7aadc8648575155d2388251d98160c30d0b67cce8"},{"ArchiveId":"lzlnWYAQWMFp32BM163QS_8kb9wJ_kqaal2XmVb_rXLRDDXhSogYZCanA7oWyi3IdlWECd8R3KT3s50gJo8_kckLtq2uUUjG3Yl1wJuvXQfVh1AwzPOtLlyldqXmDoiVFzw-NrkpIw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:40:35Z","Size":7277,"SHA256TreeHash":"9dd14621a97717597c34c23548da92bcdf68960758203d5a0897360c3211a10c"},{"ArchiveId":"WimEI6ABJtXx6j4jVK4lrVKWbPmI1iPZLErplyB7ChN6MSOH3yMOeANT7L3O6BBI4G17WjSIKE6EN6YgMP9OdgxF4XjHyyGUuWNwqy-nnIETKyp7YrFuuBkSiSloBhZkC6DRqpdrww","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:40:38Z","Size":258332,"SHA256TreeHash":"81c9683d541ede1b4646b19b2b3fa9be9891251a9599d2544392a5780edaa234"},{"ArchiveId":"k-Q9oBnWeC3P7zOEN
6IMEVFjl3EwPkqi5kbEvEqby4zKEpb_aDj4f88Us1X7QBvG3Pi8GUriEnNlXXlNH5s4-4cBfQryVjY_MOAnSakhgCLXs-srczsWIZvtkkMsh4XFiBpVzYao3w","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:42:46Z","Size":7259,"SHA256TreeHash":"0aeed31f85186f67b0a5f457d4fbfe3d82e27fc8ccb5df8289a44602cb2b2b18"},{"ArchiveId":"Y7_nQQC2uSB7XXEfd_zaKpp_gqGPZ_TQTXDPLmSP8k77n9NImLnTL7apkE6AlJopAkgmPiLOaTgIXc4_mSkUFp5teSOxdPxk19Cvs2fL9S1Yv5U7wihZfrsrwNffyZl289J59G-UBg","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:42:49Z","Size":258332,"SHA256TreeHash":"c5b99e4ad85ce2eddda44a72243a69fa750ae25fdcffa26a70dcabfa1f4218c8"},{"ArchiveId":"zW6rdGwDojNoD-CjYUf8p_tbX2UMPHedXwUAM4GxNRkO0GoE1Ax5rpGr38LTnzZ_rCX-4F3kdJiAm1ahm-CfAzefUxenayuoS6cg384s5UHbZGsD2QpogBj9EJDDWlzrj8hr8DPC1Q","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:45:28Z","Size":7259,"SHA256TreeHash":"dcf5c7bfbdeb25d789671a951935bf5797212963c582099c5d9da75ae1ecfccd"},{"ArchiveId":"W9argz7v3GxZUkItGwILf1QRot8BNbE4kOJVvUrwOGs72KGog0QCGc8PV-3cWUvhfxkCFLuoZE7qJCQmT2Cc_LvaV46hWFFvgs5TFBdIySr2jeil-d8cYR5oN9zAvkYCGuvDlXmgxw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:45:31Z","Size":258328,"SHA256TreeHash":"75b3cdc350fb05d69874584ed6f3374a32b9142267ca916a0db6fc535711f24a"}]}[root@hetzner2 ~]#
- some bash magic makes the json easier to parse, and we see that my test file is there, but, as I was adjusting the script, I accidentally re-uploaded it 4 and a half times. It doesn't make sense to delete these archives, and that's why I tested the first upload with the smallest one = 260KB. So this redundancy is a negligible cost https://stackoverflow.com/questions/1955505/parsing-json-with-sed-and-awk
[root@hetzner2 ~]# cat output.json | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020"
"InventoryDate":"2018-07-17T08:46:39Z"
"ArchiveList":["ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA"
"ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name"
"CreationDate":"2018-03-31T02:35:48Z"
"Size":380236
"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"
...
"ArchiveId":"QwTHHmRo-NpqTTe2uy87GgB2MVydnz--3-3Z5u_0gdh5FPxEl2YSyjmJy3CKNDmJaNtrmwLeRF4_GubyZFc-CzlWl6OqZmINkCVSz34wY-k336C8HUOoKm5tPV3riSYaPb7WjjXwNQ"
"ArchiveDescription":"hetzner1_20170801-052001.tar.gpg"
"CreationDate":"2018-04-08T09:10:31Z"
"Size":41165282020
"SHA256TreeHash":"3fca81cf39f315fb2063714fffa526a533a7b3a556a6c4361a6ca3458ed66f29"
"ArchiveId":"EmeH9kAWeVAyMa68pIknrJ135ZyXKB8WcjVKGQ58cVQE4Q98SMsX1OerOA4-_Q6epBJ_hgUT7ztFQ5d6PNiPRJ3H8uUIqXG3pkve5MaeA_cqAqvu4apBhU2HgALb1iS3NKy5IRdeUg"
"ArchiveDescription":"hetzner1_20180101-062001.tar.gpg"
"CreationDate":"2018-04-08T10:22:21Z"
"Size":12517449646
"SHA256TreeHash":"27e393d0ece9cadafa5902b26e0685c4b6990a95a7a18edbce5687a2e9ed4c55"
"ArchiveId":"NX074yaGa7FGL_QH5W9-mZ9TVmi0T2428B1lW8aEck6Ydjk3H3W6pgQTisOqE9B7azs1jykJ_IL-fdbkLzhAmrpWNGJBq5hVjfMNSP-Msm976Mf7mnXe6Z6QDkO5PVXaFsNZ1EzNyw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:29:47Z"
"Size":258332
"SHA256TreeHash":"7541016c23b31391157a4151d9277746585b220afdf1e980cafe205474f796d4"
"ArchiveId":"uHk-GTBb6LVulxOkgs_ZYdF-cvKubUpvdP7hoS9Cqduw8YPInJaHB4LbBHpIxOL1idfYoMm-h4YI_Jq8qN3EnOBHiAjqUEwJAstagfMEvk2E38IlNLu_5J_09E0JM7MZXc4RSEZfNA"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:35:50Z"
"Size":7277
"SHA256TreeHash":"f431faa85f3d3910395e012c9eeeba097142d778b80e1d5c0a596970cc9c2fe6"
"ArchiveId":"n8UslfWy3wmFYZNYJF3PfuxVoLNORVes-IunJoyzKJDYMNqmkwybrG9KVGoL4sbRspq0Tqmccn87hLGZ_A7kjBB6fvnWuAOjALhNinbDe-RkESPVPWN6464vfCIf3BI3NhK0_nzCNw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:35:52Z"
"Size":258332
"SHA256TreeHash":"2846f515119510f2d03f84d7aadc8648575155d2388251d98160c30d0b67cce8"
"ArchiveId":"lzlnWYAQWMFp32BM163QS_8kb9wJ_kqaal2XmVb_rXLRDDXhSogYZCanA7oWyi3IdlWECd8R3KT3s50gJo8_kckLtq2uUUjG3Yl1wJuvXQfVh1AwzPOtLlyldqXmDoiVFzw-NrkpIw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:40:35Z"
"Size":7277
"SHA256TreeHash":"9dd14621a97717597c34c23548da92bcdf68960758203d5a0897360c3211a10c"
"ArchiveId":"WimEI6ABJtXx6j4jVK4lrVKWbPmI1iPZLErplyB7ChN6MSOH3yMOeANT7L3O6BBI4G17WjSIKE6EN6YgMP9OdgxF4XjHyyGUuWNwqy-nnIETKyp7YrFuuBkSiSloBhZkC6DRqpdrww"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:40:38Z"
"Size":258332
"SHA256TreeHash":"81c9683d541ede1b4646b19b2b3fa9be9891251a9599d2544392a5780edaa234"
"ArchiveId":"k-Q9oBnWeC3P7zOEN6IMEVFjl3EwPkqi5kbEvEqby4zKEpb_aDj4f88Us1X7QBvG3Pi8GUriEnNlXXlNH5s4-4cBfQryVjY_MOAnSakhgCLXs-srczsWIZvtkkMsh4XFiBpVzYao3w"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:42:46Z"
"Size":7259
"SHA256TreeHash":"0aeed31f85186f67b0a5f457d4fbfe3d82e27fc8ccb5df8289a44602cb2b2b18"
"ArchiveId":"Y7_nQQC2uSB7XXEfd_zaKpp_gqGPZ_TQTXDPLmSP8k77n9NImLnTL7apkE6AlJopAkgmPiLOaTgIXc4_mSkUFp5teSOxdPxk19Cvs2fL9S1Yv5U7wihZfrsrwNffyZl289J59G-UBg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:42:49Z"
"Size":258332
"SHA256TreeHash":"c5b99e4ad85ce2eddda44a72243a69fa750ae25fdcffa26a70dcabfa1f4218c8"
"ArchiveId":"zW6rdGwDojNoD-CjYUf8p_tbX2UMPHedXwUAM4GxNRkO0GoE1Ax5rpGr38LTnzZ_rCX-4F3kdJiAm1ahm-CfAzefUxenayuoS6cg384s5UHbZGsD2QpogBj9EJDDWlzrj8hr8DPC1Q"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:45:28Z"
"Size":7259
"SHA256TreeHash":"dcf5c7bfbdeb25d789671a951935bf5797212963c582099c5d9da75ae1ecfccd"
"ArchiveId":"W9argz7v3GxZUkItGwILf1QRot8BNbE4kOJVvUrwOGs72KGog0QCGc8PV-3cWUvhfxkCFLuoZE7qJCQmT2Cc_LvaV46hWFFvgs5TFBdIySr2jeil-d8cYR5oN9zAvkYCGuvDlXmgxw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:45:31Z"
"Size":258328
"SHA256TreeHash":"75b3cdc350fb05d69874584ed6f3374a32b9142267ca916a0db6fc535711f24a"]
[root@hetzner2 ~]#
- anyway, I'm satisfied that the upload script is working. I went ahead and executed it to upload the remaining archives. Hopefully by next week I can validate that those have been uploaded too.
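- for that validation, the SHA256TreeHash values in the inventory can be recomputed locally; AWS documents the Glacier tree hash as SHA-256 over 1 MiB chunks, combined pairwise up to a single root. A sketch:

```python
# sketch: compute a Glacier-style SHA-256 tree hash of a local file so it
# can be compared against the SHA256TreeHash field in the vault inventory
import hashlib

MIB = 1024 * 1024

def glacier_tree_hash(path):
    # leaf level: SHA-256 of each 1 MiB chunk
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(MIB)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).digest())
    if not hashes:  # empty file
        hashes = [hashlib.sha256(b"").digest()]
    # combine pairwise until a single root hash remains
    while len(hashes) > 1:
        combined = []
        for i in range(0, len(hashes), 2):
            if i + 1 < len(hashes):
                combined.append(hashlib.sha256(hashes[i] + hashes[i + 1]).digest())
            else:
                combined.append(hashes[i])  # odd hash out is promoted unchanged
        hashes = combined
    return hashes[0].hex()
```

for a file under 1 MiB (like the 260KB osesurv test tarball) this reduces to the plain SHA-256 of the file, since there is only one chunk.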
- ...
- continuing with duplicity, I tried to pass the '--batch --passphrase-file' options to gpg so that I could use a 4K symmetric key file rather than enter a unicode-limited (ascii-limited?) passphrase. It didn't work. I found a bug report from 8 years ago about this issue, and asked for a status update. https://bugs.launchpad.net/duplicity/+bug/503305
[root@hetzner2 ~]# duplicity --gpg-options '--batch --passphrase-file /root/backups/ose-backups-cron.key' /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg b2://<obfuscated>:<obfuscated>@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase:
- that ^ ticket says the workaround is to just cat the file into an environment variable named "PASSPHRASE"
[root@hetzner2 ~]# export PASSPHRASE="`cat /root/backups/ose-backups-cron.key`"
[root@hetzner2 ~]# duplicity /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg b2://<obfuscated>:<obfuscated>@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1532034857.35 (Thu Jul 19 21:14:17 2018)
EndTime 1532034857.35 (Thu Jul 19 21:14:17 2018)
ElapsedTime 0.00 (0.00 seconds)
SourceFiles 1
SourceFileSize 7642 (7.46 KB)
NewFiles 1
NewFileSize 7642 (7.46 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 1
RawDeltaSize 7642 (7.46 KB)
TotalDestinationSizeChange 7873 (7.69 KB)
Errors 0
-------------------------------------------------
[root@hetzner2 ~]#
- ^ that appears to have worked. I'll attempt a restore next week.
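The full non-interactive flow could be sketched like this (the key file path matches the log above; the source dir and bucket credentials are placeholders, and the key-generation step is only needed once, if the key file doesn't already exist):

```shell
# Sketch: non-interactive symmetric encryption for duplicity via the
# PASSPHRASE environment variable (the workaround from the launchpad ticket).
umask 077                                    # keep the key file root-only
# generate a high-entropy key once (skip if the key file already exists)
head -c 64 /dev/urandom | base64 -w0 > /root/backups/ose-backups-cron.key
export PASSPHRASE="$(cat /root/backups/ose-backups-cron.key)"
duplicity /var/tmp/deprecateHetzner1/sync b2://keyID:applicationKey@ose-server-backups/
unset PASSPHRASE                             # don't leave the secret in the environment
```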
=Wed Jul 18, 2018=
- hetzner got back to me again about hetzner1; they said they can turn off the server, but we have no means to toggle its power state in konsoleH. I told them to turn it off, but to be sure to leave it idle/unused so that we can power it back on in case we discover any issues. I told them to *not* cancel the contract, but that we would do so after confirming that nothing broke on our end with the server offline.
- I could probably have them type a set of iptables rules in for me, but the lag time for each back-and-forth with hetzner is 24 hours. So if there were any issues with the iptables rules, it could take days to resolve. It's much easier to say "turn the server on/off" than to instruct them on iptables rules. The benefits of iptables are its low-impact action & quick reversal with a rules flush (`iptables -F`), but that benefit is negated by this delay, so I'll just stick to "turn it off" and "turn it on" requests.
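If hetzner ever did agree to apply rules, the whole request could be compressed into a single iptables-restore file along these lines (an illustrative sketch, not the actual hetzner1 ruleset):

```
# drop everything inbound except loopback, established flows, and ssh
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

Applied with `iptables-restore < rules.v4` and reverted with `iptables -P INPUT ACCEPT; iptables -F` (note that the flush alone would leave the DROP policy in place and lock out ssh).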
- meanwhile, regarding the backups of hetzner1: the first archive is still being validated. It looks like the job to sync the vault's contents finished, but the vault is still listed as empty!
[root@hetzner2 ~]# glacier.py --region us-west-2 job list
[root@hetzner2 ~]# glacier.py --region us-west-2 vault list
deleteMeIn2020
[root@hetzner2 ~]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 ~]# date
Wed Jul 18 18:44:14 UTC 2018
[root@hetzner2 ~]#
- I logged into the aws console, which shows that there are 55 archives totaling 270.5G as of the last inventory on Jul 17, 2018 4:46:39 AM
- perhaps the issue is that I just initiated the sync job before the archive
- I checked my logs from when I was working with glacier.py on hancock in the past, and tried those commands, but they also returned an empty set https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q1#Sat_Mar_31.2C_2018
hancock% cd ~/sandbox/glacier-cli
hancock% export AWS_ACCESS_KEY_ID='CHANGME'
hancock% export AWS_SECRET_ACCESS_KEY='CHANGEME'
hancock% /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py archive list deleteMeIn2020
hancock%
- following the documentation page I wrote, I tried again, this time using the '--max-age=0' arg that I omitted last time https://wiki.opensourceecology.org/wiki/OSE_Server
[root@hetzner2 ~]# glacier.py --region us-west-2 vault sync --max-age=0 --wait deleteMeIn2020
=Tue Jul 17, 2018=
- hetzner got back to me about dropping all non-ssh traffic via iptables. They advised me against it, saying there was no security benefit to doing so (lol), and said I had no means of doing it without their assistance. I responded saying that we indeed _do_ want to drop all non-ssh traffic with iptables, as it's part of our decommissioning process, and asked what options I had based on our needs.
Hi Emanuel,

Yes, we would like to block all non-ssh traffic to the server, not just some subset of services such as databases. We are in the process of decommissioning this server.

We believe that we have finished migrating all necessary services off of this server. To be safe, I would like to leave the server in an online state with all non-ssh traffic blocked via iptables for about a month prior to canceling its contract. I prefer iptables because--if we discover that something breaks after we drop all non-ssh traffic--we can merely flush the iptables rules or stop the iptables service to fully restore the server to its prior state. The next best option would be to turn the server off, but I saw no means to control firewall rules or even toggle the power on the server from konsoleH.

Is there a way to enable a firewall on this server or turn it off for about a month before we cancel its contract? Or is there some other option we have for making this server entirely inaccessible to the Internet (other than ssh), but such that we can easily restore the server to its prior fully-functioning state if we discover a need for it?

Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7 70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org

On 07/17/2018 03:34 AM, Managed Server Support wrote:
> Dear Mr. Jakubowski
>
> Closing all ports on the managed server is barely not possible. There is no need for security reasons to do this. We cant disable the MySQL and PostgreSQL port if you want so.
>
>> Please tell me how I can configure iptables on our server to obtain the
>> above requirements.
> You cant do this by your own.
>
> If we can be of any further assistance, please let us know.
>
> Mit freundlichen Grüßen / Kind regards
>
> Emanuel Wiesner
>
> Hetzner Online GmbH
> Industriestr. 25
> 91710 Gunzenhausen / Germany
> Tel: +49 9831 505-0
> Fax: +49 9831 505-3
> www.hetzner.com
>
> Registergericht Ansbach, HRB 6089
> Geschäftsführer: Martin Hetzner
>
> 16.07.2018 20:51 - elifarley@opensourceecology.org marcin@opensourceecology.org catarina@openmaterials.org tom.griffing@gmail.com michael@opensourceecology.org schrieb:
- I tried to get a listing of the archives in the glacier vault, but it was still empty. It looks like the inventory job is still pending (since 15 hours ago).
[root@hetzner2 ~]# glacier.py --region us-west-2 vault list
deleteMeIn2020
[root@hetzner2 ~]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 ~]#
[root@hetzner2 ~]# glacier.py --region us-west-2 job list
i/d 2018-07-17T04:28:46.003Z deleteMeIn2020
[root@hetzner2 ~]# date
Tue Jul 17 19:32:59 UTC 2018
[root@hetzner2 ~]#
- ...
- marcin got back to me about Janus. He said he couldn't get jangouts to work. I tried, and I got the error "Janus error: Probably a network error, is the gateway down?: [object Object] Do you want to reload in order to retry?". I think this is because the janus gateway is still running on the old self-signed cert instead of the let's encrypt cert (which I did set up nginx to use)
- I changed the certs in both files at /opt/janus/etc/janus/{janus.cfg,janus.transport.http.cfg}
[root@ip-172-31-28-115 janus]# date
Tue Jul 17 20:03:50 UTC 2018
[root@ip-172-31-28-115 janus]# pwd
/opt/janus/etc/janus
[root@ip-172-31-28-115 janus]# grep 'cert_' *.cfg
janus.cfg:;cert_pem = /opt/janus/share/janus/certs/mycert.pem
janus.cfg:;cert_key = /opt/janus/share/janus/certs/mycert.key
janus.cfg:cert_pem = /etc/letsencrypt/live/jangouts.opensourceecology.org/fullchain.pem;
janus.cfg:cert_key = /etc/letsencrypt/live/jangouts.opensourceecology.org/privkey.pem;
janus.cfg:;cert_pwd = secretpassphrase
janus.transport.http.cfg:;cert_pem = /opt/janus/share/janus/certs/mycert.pem
janus.transport.http.cfg:;cert_key = /opt/janus/share/janus/certs/mycert.key
janus.transport.http.cfg:cert_pem = /etc/letsencrypt/live/jangouts.opensourceecology.org/fullchain.pem;
janus.transport.http.cfg:cert_key = /etc/letsencrypt/live/jangouts.opensourceecology.org/privkey.pem;
janus.transport.http.cfg:;cert_pwd = secretpassphrase
[root@ip-172-31-28-115 janus]#
- ...
- I'm still downloading the OSE Linux iso image (going on a few days of connecting, disconnecting, & re-continuing the `wget -c`). This sort of thing can lead to corruption, but we don't have any checksums published. Thus, I updated our OSE Linux wiki article, adding a TODO item for checksum files and cryptographic signature files of those checksum files, per the standard used by debian & ubuntu https://wiki.opensourceecology.org/wiki/OSE_Linux#Hashes_.26_Signatures
- I sent Christian an email about this request
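The debian/ubuntu convention boils down to two artifacts published next to the iso; a sketch (filenames are hypothetical, and it assumes the OSE release gpg key is available in the signer's keyring):

```shell
# publish a checksum file plus a detached signature over it
sha256sum ose-linux-v1.0.iso > SHA256SUMS
gpg --armor --detach-sign --output SHA256SUMS.sig SHA256SUMS

# downloaders verify the signature first, then the checksum
gpg --verify SHA256SUMS.sig SHA256SUMS
sha256sum -c SHA256SUMS
```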
- ...
- I checked the `tail -f` I left open on /var/log/maillog since adding the IPv6 PTR = RDNS entry in an attempt to resolve the bounced emails when sending to gmail users from our wiki. It appears that this issue has not re-occurred since I started monitoring 5 days ago
- there was one line that repeated many dozen times over those 5 days with 'status=bounced', and it looks like this; not sure if this is an issue or what caused it
Jul 13 06:20:10 hetzner2 postfix/local[6631]: D087E681EA6: to=<root@hetzner2.opensourceecology.org>, relay=local, delay=0.01, delays=0.01/0/0/0.01, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" )
- ugh, actually, that last entry was 4 days ago. So the tail stopped producing output after 1 day.
- I did a better test: grepping (or zgrepping, for the compressed files) for bounces, excluding the unrelated procmail issue pointed out above. From this test it's very clear that there were many issues with google bouncing our emails per day, but then (after my change) the issue ceased.
[root@hetzner2 log]# grep -i 'status=bounced' maillog-20180717 | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180716.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180715.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180714.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180713.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180712.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180711.gz | grep -vi 'create user output file'
Jul 10 19:26:05 hetzner2 postfix/smtp[19740]: 2E818681EA6: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c06::1a]:25, delay=0.55, delays=0.02/0/0.09/0.43, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c06::1a] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . y47-v6si14908216wrd.389 - gsmtp (in reply to end of DATA command))
Jul 10 19:26:05 hetzner2 postfix/smtp[19741]: 31112681DE7: to=<marcin@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c0b::1b]:25, delay=0.6, delays=0.02/0/0.08/0.49, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c0b::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . a9-v6si71749wme.40 - gsmtp (in reply to end of DATA command))
Jul 10 19:26:06 hetzner2 postfix/smtp[19740]: B3CB1681EA4: to=<noreply@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c06::1a]:25, delay=0.55, delays=0.01/0/0.05/0.48, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c06::1a] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . t9-v6si9165105wrq.111 - gsmtp (in reply to end of DATA command))
Jul 10 19:26:45 hetzner2 postfix/smtp[19740]: 324DB681EA6: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c0b::1b]:25, delay=0.26, delays=0.02/0/0.05/0.19, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c0b::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . a4-v6si49801wmc.231 - gsmtp (in reply to end of DATA command))
Jul 10 19:26:45 hetzner2 postfix/smtp[19741]: 718C1681DE9: to=<noreply@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c08::1b]:25, delay=0.2, delays=0.01/0/0.05/0.14, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c08::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . 89-v6si9030402wrl.156 - gsmtp (in reply to end of DATA command))
Jul 10 20:09:22 hetzner2 postfix/smtp[25087]: 1C819681EA4: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c07::1a]:25, delay=0.37, delays=0.02/0/0.09/0.26, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c07::1a] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . o83-v6si104445wmo.206 - gsmtp (in reply to end of DATA command))
Jul 10 20:21:03 hetzner2 postfix/smtp[27148]: 9622F681EA6: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c06::1b]:25, delay=0.49, delays=0.02/0/0.06/0.41, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c06::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . g2-v6si121400wmc.195 - gsmtp (in reply to end of DATA command))
Jul 10 20:21:03 hetzner2 postfix/smtp[27149]: 17F89681DE9: to=<noreply@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c00::1b]:25, delay=0.17, delays=0.01/0/0.05/0.11, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c00::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . o16-v6si139472wme.45 - gsmtp (in reply to end of DATA command))
[root@hetzner2 log]#
- going back further, we see that this did happen 'nearly' every day, between 0 and 51 times per day
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180711.gz | grep -vi 'create user output file' | wc -l
8
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180710.gz | grep -vi 'create user output file' | wc -l
19
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180709.gz | grep -vi 'create user output file' | wc -l
9
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180708.gz | grep -vi 'create user output file' | wc -l
20
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180707.gz | grep -vi 'create user output file' | wc -l
51
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180706.gz | grep -vi 'create user output file' | wc -l
27
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180705.gz | grep -vi 'create user output file' | wc -l
10
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180704.gz | grep -vi 'create user output file' | wc -l
13
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180703.gz | grep -vi 'create user output file' | wc -l
15
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180702.gz | grep -vi 'create user output file' | wc -l
0
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180701.gz | grep -vi 'create user output file' | wc -l
0
[root@hetzner2 log]#
- a better test across all the files shows 0-101 of these errors happening per day over the past 31 days; 11 of the 31 days had no errors.
[root@hetzner2 log]# date
Tue Jul 17 21:10:42 UTC 2018
[root@hetzner2 log]# pwd
/var/log
[root@hetzner2 log]# for file in $(ls -1 maillog*.gz); do echo $file; zcat $file | grep -i 'status=bounced' | grep -vi 'create user output file' | wc -l; done
maillog-20180616.gz
87
maillog-20180617.gz
57
maillog-20180618.gz
14
maillog-20180619.gz
37
maillog-20180620.gz
0
maillog-20180621.gz
0
maillog-20180622.gz
4
maillog-20180623.gz
48
maillog-20180624.gz
68
maillog-20180625.gz
5
maillog-20180626.gz
0
maillog-20180627.gz
20
maillog-20180628.gz
101
maillog-20180629.gz
9
maillog-20180630.gz
0
maillog-20180701.gz
0
maillog-20180702.gz
0
maillog-20180703.gz
15
maillog-20180704.gz
13
maillog-20180705.gz
10
maillog-20180706.gz
27
maillog-20180707.gz
51
maillog-20180708.gz
20
maillog-20180709.gz
9
maillog-20180710.gz
19
maillog-20180711.gz
8
maillog-20180712.gz
0
maillog-20180713.gz
0
maillog-20180714.gz
0
maillog-20180715.gz
0
maillog-20180716.gz
0
[root@hetzner2 log]#
- this test should probably be re-done in 31 days. If it's all 0s, then I think we can say the problem has been resolved.
- ...
- I reset the backblaze@opensourceecology.org account's password, stored it to keepass, created an account on backblaze.com using this email address, and stored _that_ to keepass.
- I logged into backblaze, and the first fucking thing they say is that we need to associate the account with a phone #. The reason I use a shared email address is in case they want to do 2FA via email. 2FA via phone doesn't work between Marcin & myself (or other sysadmins). Not to mention that 2FA over SMS is fucking stupidly insecure. And I don't have a fucking phone to begin with! https://help.backblaze.com/hc/en-us/articles/218513467-Mobile-Phone-Verification-for-B2-
- I went to send Backblaze an email asking if I can use their service with TOTP instead of the fucking security theater of 2FA over sms, but I couldn't send it without first solving a captcha. Of course, I block all google domains in my browser, so I had to switch to an ephemeral vm just to use their account. God damn it, this is not a great first experience.
Hi,

Our org is looking to switch from Glacier to B2 for our daily 15G of server backups, but I do not have a phone number. How can I use Backblaze B2?

We use 2FA for most of our accounts where possible, but we use TOTP (time-based one time passwords), a very well accepted standard which is supported by many open source apps (ie: andOTP, FreeOTP, Google Authenticator, etc). In fact, 2FA over SMS (or email) is a horrible idea as it sends the token through an insecure medium. Indeed, it is trivial to intercept a message over SMS or email.

I do not own a phone number. Moreover, we're a geographically dispersed team; if I registered this account with one of my coworker's phone numbers, how would I be able to login when their phone in Germany or rural Missouri or Austin has the token I need to login?

Please let me know if it's possible for me to use this service for our backups without a phone number.

Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7 70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
- I read some scary things in backblaze reviews about limits on file extensions and low max file sizes, but those apply to the desktop service only. The cloud (B2) service's file size limit is 10T, which is well within our requirements https://www.backblaze.com/b2/docs/large_files.html
- like amazon, large files do need to be uploaded in parts. With glacier I avoided this api hell by using glacier.py. It does appear that the backblaze python cli tool has a simple 'upload-file' function, which abstracts the multi-part upload api calls for you https://www.backblaze.com/b2/docs/quick_command_line.html
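What 'upload-file' hides can be illustrated with plain coreutils: the archive is cut into fixed-size parts, each part gets its own checksum, and the parts recombine losslessly (a conceptual sketch only, assuming a 100M part size; the real B2 large-file API uses a SHA1 per part and its own part-size limits):

```shell
# cut a large archive into numbered parts, as a multi-part upload would
split --bytes=100M --numeric-suffixes backup.tar.gpg part_
sha1sum part_*                  # one checksum per part, as B2 requires
cat part_* > rejoined.tar.gpg   # equivalent of the server-side reassembly
```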
- backblaze got back to me stating that the phone number is required, but that we can switch to TOTP after registering a phone number. That works I guess, though their incompetence is still showing..
- I added my google hangouts number to the account, then changed 2FA to disable 2FA via sms & enable 2FA via TOTP. I added the totp secret key to keepass with 10 pre-gen'd tokens.
- fucking hell. I went to test it by logging into my account from another vm. It let me login without entering any tokens! I confirmed that 2fa settings say to use it "Every time I sign in". Fail.
- not only does that show Backblaze's incompetence by totally bypassing our 2FA settings, but it also means that I can't validate that my 2FA token on my phone (and backed-up to keepass) is valid. So as soon as I log off on both VMs, and try to login later when it _does_ require me to enter a token, I could have locked myself out.
- the only reason I'm still considering using BackBlaze is because all of our data shipped to them will be encrypted. Therefore, they can be security dumdums (but advertise to be gurus; fucking marketing) and it doesn't affect us (this is very important data, though). As long as they stay durable (they claim 99.999999999%) https://www.backblaze.com/security.html
- actually, if I'm running their shitty code as root, that's an issue. It may be a good idea to have the upload be performed by a lower-privileged backblaze user after encryption by root.
- I thought maybe I could get past this with duplicity, but it turns out that duplicity just uses the b2 api. https://bazaar.launchpad.net/~duplicity-team/duplicity/0.8-series/view/head:/duplicity/backends/b2backend.py
- duplicity has included b2 support since 0.7.12, and I confirmed that this is already in the yum repo
[maltfield@hetzner2 ~]$ sudo yum install duplicity
...
==============================================================================================================================
 Package                          Arch       Version               Repository  Size
==============================================================================================================================
Installing:
 duplicity                        x86_64     0.7.17-1.el7          epel        553 k
Installing for dependencies:
 PyYAML                           x86_64     3.10-11.el7           base        153 k
 librsync                         x86_64     1.0.0-1.el7           epel        53 k
 libyaml                          x86_64     0.1.4-11.el7_0        base        55 k
 ncftp                            x86_64     2:3.2.5-7.el7         epel        340 k
 pexpect                          noarch     2.3-11.el7            base        142 k
 python-GnuPGInterface            noarch     0.3.2-11.el7          epel        26 k
 python-fasteners                 noarch     0.9.0-2.el7           epel        35 k
 python-httplib2                  noarch     0.9.2-1.el7           extras      115 k
 python-lockfile                  noarch     1:0.9.1-4.el7.centos  extras      28 k
 python-paramiko                  noarch     2.1.1-4.el7           extras      268 k
 python2-PyDrive                  noarch     1.3.1-3.el7           epel        49 k
 python2-gflags                   noarch     2.0-5.el7             epel        61 k
 python2-google-api-client        noarch     1.6.3-1.el7           epel        87 k
 python2-keyring                  noarch     5.0-3.el7             epel        115 k
 python2-oauth2client             noarch     4.0.0-2.el7           epel        144 k
 python2-pyasn1-modules           noarch     0.1.9-7.el7           base        59 k
 python2-uritemplate              noarch     3.0.0-1.el7           epel        18 k
Transaction Summary
==============================================================================================================================
Install 1 Package (+17 Dependent packages)
Total download size: 2.2 M
Installed size: 9.4 M
...
[maltfield@hetzner2 ~]$
- did some more research about duplicity + b2
- https://help.backblaze.com/hc/en-us/articles/115001518354-How-to-configure-Backblaze-B2-with-Duplicity-on-Linux
- https://www.loganmarchione.com/2017/07/backblaze-b2-backup-setup/
- the benefit here is that (as the second link above shows) we can have duplicity make small incremental backups all month, then just do a full backup after 30 days. It also does encryption for you using gpg.
- the disadvantage is the dependency on duplicity. Someone who doesn't know duplicity will probably just login to the backblaze wui (where you can download files directly using your web browser), but this data (the incrementals, anyway) will be useless to them unless they use the duplicity tool
- I would be less concerned about this if we could make the backups from the 1st of every month be a complete backup.
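One way to get that guarantee would be a crontab fragment like this sketch (paths, schedule, and bucket credentials are hypothetical; duplicity's 'full' and 'incr' actions force the backup type, and PASSPHRASE would still need to be provided to duplicity):

```
# /etc/crontab fragment: force a self-contained full backup on the 1st of
# the month, incrementals every other day
0 3 1 * *    root  duplicity full /var/ose/backups b2://keyID:appKey@ose-server-backups/
0 3 2-31 * * root  duplicity incr /var/ose/backups b2://keyID:appKey@ose-server-backups/
```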
- if we use duplicity with incrementals, then it necessarily uses a lot of temp space on the server; this is even listed as a common complaint in their 2-question FAQ http://duplicity.nongnu.org/FAQ.html
- another disadvantage is that duplicity (which does both the encryption & upload) would necessarily execute the b2 api code as root. But if I hacked it into my script, I could encrypt as root and then call the backblaze cli tool as an underprivileged user
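That hack could look like this sketch (the 'b2user' account and staging paths are hypothetical; gpg does the symmetric encryption as root, and only the already-encrypted file is handed to the third-party B2 client):

```shell
# root encrypts with the symmetric key file...
gpg --batch --symmetric --cipher-algo AES256 \
    --passphrase-file /root/backups/ose-backups-cron.key \
    --output /var/tmp/staging/backup.tar.gpg /var/tmp/backup.tar
chown b2user /var/tmp/staging/backup.tar.gpg

# ...then an unprivileged user runs the upload code
sudo -u b2user b2 upload-file ose-server-backups \
    /var/tmp/staging/backup.tar.gpg backup.tar.gpg
```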
- spent some time reading up on the duplicity manual http://duplicity.nongnu.org/duplicity.1.html
- I do think it's better to reuse a popular tool like duplicity rather than reinventing the wheel here, so I'll give it a try
[root@hetzner2 ~]# yum install duplicity
...
Complete!
[root@hetzner2 ~]#
- I created a bucket named 'ose-server-backups'
- expecting failure, I tried to upload to b2
[root@hetzner2 sync]# duplicity hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg b2://<obfuscated>:<obfuscated>@ose-server-backups/
BackendException: B2 backend requires B2 Python APIs (pip install b2)
[root@hetzner2 sync]# pip install b2
-bash: pip: command not found
[root@hetzner2 sync]#
- I installed pip (python2-pip)
[root@hetzner2 sync]# yum install python-pip
Loaded plugins: fastestmirror, replace
Loading mirror speeds from cached hostfile
 * base: mirror.wiuwiu.de
 * epel: mirror.wiuwiu.de
 * extras: mirror.wiuwiu.de
 * updates: mirror.wiuwiu.de
 * webtatic: uk.repo.webtatic.com
Resolving Dependencies
--> Running transaction check
---> Package python2-pip.noarch 0:8.1.2-6.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================
 Package           Arch      Version          Repository  Size
==============================================================================================================================
Installing:
 python2-pip       noarch    8.1.2-6.el7      epel        1.7 M
Transaction Summary
==============================================================================================================================
Install 1 Package
Total download size: 1.7 M
Installed size: 7.2 M
Is this ok [y/d/N]: y
Downloading packages:
python2-pip-8.1.2-6.el7.noarch.rpm                      | 1.7 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python2-pip-8.1.2-6.el7.noarch                    1/1
  Verifying  : python2-pip-8.1.2-6.el7.noarch                    1/1
Installed:
  python2-pip.noarch 0:8.1.2-6.el7
Complete!
[root@hetzner2 sync]#
- unsurprisingly, shitty pip failed
[root@hetzner2 sync]# pip install b2
Collecting b2
  Downloading https://files.pythonhosted.org/packages/c2/1c/c52039c3c3dcc3d1a9725cef7523c4b50abbb967068a1ea40f28cd9978f5/b2-1.2.0.tar.gz (99kB)
    100% || 102kB 4.5MB/s
    Complete output from command python setup.py egg_info:
    setuptools 20.2 or later is required. To fix, try running: pip install "setuptools>=20.2"
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-7XvsdJ/b2/
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[root@hetzner2 sync]#
- the upgrade worked
[root@hetzner2 sync]# pip install --upgrade pip
Collecting pip
  Downloading https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl (1.3MB)
    100% || 1.3MB 1.2MB/s
Installing collected packages: pip
  Found existing installation: pip 8.1.2
    Uninstalling pip-8.1.2:
      Successfully uninstalled pip-8.1.2
Successfully installed pip-10.0.1
[root@hetzner2 sync]#
- but it still failed again
[root@hetzner2 sync]# pip install b2
Collecting b2
  Using cached https://files.pythonhosted.org/packages/c2/1c/c52039c3c3dcc3d1a9725cef7523c4b50abbb967068a1ea40f28cd9978f5/b2-1.2.0.tar.gz
    Complete output from command python setup.py egg_info:
    setuptools 20.2 or later is required. To fix, try running: pip install "setuptools>=20.2"
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-mCAFuT/b2/
[root@hetzner2 sync]#
- preferring the system's package manager over pip, I tried to install setuptools, but it was already installed
[root@hetzner2 sync]# yum install python-setuptools
Loaded plugins: fastestmirror, replace
Loading mirror speeds from cached hostfile
 * base: mirror.wiuwiu.de
 * epel: mirror.wiuwiu.de
 * extras: mirror.wiuwiu.de
 * updates: mirror.wiuwiu.de
 * webtatic: uk.repo.webtatic.com
Package python-setuptools-0.9.8-7.el7.noarch already installed and latest version
Nothing to do
[root@hetzner2 sync]#
- biting my lip in anxious fear, I proceeded with the install of setuptools from pip
[root@hetzner2 sync]# pip install "setuptools>=20.2" Collecting setuptools>=20.2 Downloading https://files.pythonhosted.org/packages/ff/f4/385715ccc461885f3cedf57a41ae3c12b5fec3f35cce4c8706b1a112a133/setuptools-40.0.0-py2.py3-none-any.whl (567kB) 100% || 573kB 11.7MB/s Installing collected packages: setuptools Found existing installation: setuptools 0.9.8 Uninstalling setuptools-0.9.8: Successfully uninstalled setuptools-0.9.8 Successfully installed setuptools-40.0.0 [root@hetzner2 sync]#
- this made it further, but still failed
[root@hetzner2 sync]# pip install b2
Collecting b2
  Using cached https://files.pythonhosted.org/packages/c2/1c/c52039c3c3dcc3d1a9725cef7523c4b50abbb967068a1ea40f28cd9978f5/b2-1.2.0.tar.gz
Collecting arrow<0.12.1,>=0.8.0 (from b2)
  Downloading https://files.pythonhosted.org/packages/90/48/7ecfce4f2830f59dfacbb2b5a31e3ff1112b731a413724be40f57faa4450/arrow-0.12.0.tar.gz (89kB)
    100% || 92kB 2.3MB/s
Collecting logfury>=0.1.2 (from b2)
  Downloading https://files.pythonhosted.org/packages/55/71/c70df1ef41721b554c91982ebde423a5cf594261aa5132e39ade9196fa3b/logfury-0.1.2-py2.py3-none-any.whl
Collecting requests>=2.9.1 (from b2)
  Downloading https://files.pythonhosted.org/packages/65/47/7e02164a2a3db50ed6d8a6ab1d6d60b69c4c3fdf57a284257925dfc12bda/requests-2.19.1-py2.py3-none-any.whl (91kB)
    100% || 92kB 3.4MB/s
Collecting six>=1.10 (from b2)
  Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Collecting tqdm>=4.5.0 (from b2)
  Downloading https://files.pythonhosted.org/packages/93/24/6ab1df969db228aed36a648a8959d1027099ce45fad67532b9673d533318/tqdm-4.23.4-py2.py3-none-any.whl (42kB)
    100% || 51kB 6.9MB/s
Collecting futures>=3.0.5 (from b2)
  Downloading https://files.pythonhosted.org/packages/2d/99/b2c4e9d5a30f6471e410a146232b4118e697fa3ffc06d6a65efde84debd0/futures-3.2.0-py2-none-any.whl
Collecting python-dateutil (from arrow<0.12.1,>=0.8.0->b2)
  Downloading https://files.pythonhosted.org/packages/cf/f5/af2b09c957ace60dcfac112b669c45c8c97e32f94aa8b56da4c6d1682825/python_dateutil-2.7.3-py2.py3-none-any.whl (211kB)
    100% || 215kB 4.4MB/s
Collecting backports.functools_lru_cache==1.2.1 (from arrow<0.12.1,>=0.8.0->b2)
  Downloading https://files.pythonhosted.org/packages/d1/0e/c473e3c37c34fea699d85d5b9e3caf712813c4cd2dcc0a5a64ec2a6867f7/backports.functools_lru_cache-1.2.1-py2.py3-none-any.whl
Collecting funcsigs (from logfury>=0.1.2->b2)
  Downloading https://files.pythonhosted.org/packages/69/cb/f5be453359271714c01b9bd06126eaf2e368f1fddfff30818754b5ac2328/funcsigs-1.0.2-py2.py3-none-any.whl
Collecting certifi>=2017.4.17 (from requests>=2.9.1->b2)
  Downloading https://files.pythonhosted.org/packages/7c/e6/92ad559b7192d846975fc916b65f667c7b8c3a32bea7372340bfe9a15fa5/certifi-2018.4.16-py2.py3-none-any.whl (150kB)
    100% || 153kB 5.4MB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.9.1->b2)
  Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
    100% || 143kB 5.5MB/s
Collecting urllib3<1.24,>=1.21.1 (from requests>=2.9.1->b2)
  Downloading https://files.pythonhosted.org/packages/bd/c9/6fdd990019071a4a32a5e7cb78a1d92c53851ef4f56f62a3486e6a7d8ffb/urllib3-1.23-py2.py3-none-any.whl (133kB)
    100% || 143kB 5.9MB/s
Collecting idna<2.8,>=2.5 (from requests>=2.9.1->b2)
  Downloading https://files.pythonhosted.org/packages/4b/2a/0276479a4b3caeb8a8c1af2f8e4355746a97fab05a372e4a2c6a6b876165/idna-2.7-py2.py3-none-any.whl (58kB)
    100% || 61kB 7.4MB/s
Installing collected packages: six, python-dateutil, backports.functools-lru-cache, arrow, funcsigs, logfury, certifi, chardet, urllib3, idna, requests, tqdm, futures, b2
  Found existing installation: six 1.9.0
    Uninstalling six-1.9.0:
      Successfully uninstalled six-1.9.0
  Running setup.py install for arrow ... done
  Found existing installation: chardet 2.2.1
    Uninstalling chardet-2.2.1:
      Successfully uninstalled chardet-2.2.1
  Found existing installation: urllib3 1.10.2
    Uninstalling urllib3-1.10.2:
      Successfully uninstalled urllib3-1.10.2
  Found existing installation: idna 2.4
    Uninstalling idna-2.4:
      Successfully uninstalled idna-2.4
  Found existing installation: requests 2.6.0
Cannot uninstall 'requests'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
[root@hetzner2 sync]#
- found the backend before it was integrated into duplicity https://github.com/matthewbentley/duplicity_b2
- I decided to just install from git (meh) https://www.backblaze.com/b2/docs/quick_command_line.html
[root@hetzner2 sandbox]# git clone https://github.com/Backblaze/B2_Command_Line_Tool.git Cloning into 'B2_Command_Line_Tool'... remote: Counting objects: 5605, done. remote: Compressing objects: 100% (134/134), done. remote: Total 5605 (delta 191), reused 188 (delta 129), pack-reused 5341 Receiving objects: 100% (5605/5605), 1.45 MiB | 2.36 MiB/s, done. Resolving deltas: 100% (4005/4005), done. [root@hetzner2 sandbox]# cd B2_Command_Line_Tool/ [root@hetzner2 B2_Command_Line_Tool]# python setup.py install ... Finished processing dependencies for b2==1.2.1 [root@hetzner2 B2_Command_Line_Tool]#
- that appears to have worked, but now duplicity is asking me for a passphrase for gpg symmetric encryption. I want to feed it a key file, but all of the gpg-related options appear to be for asymmetric encryption via a public key specified by argument. That's not what I want!
[root@hetzner2 sandbox]# duplicity /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg b2://<obfuscate>:<obfuscated>@ose-server-backups/ Local and Remote metadata are synchronized, no sync needed. Last full backup date: none GnuPG passphrase:
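One possible way out (an untested sketch at this point): duplicity hands the PASSPHRASE environment variable to GnuPG, so the key file's contents can be exported into it instead of typed at the prompt:

```shell
# sketch: feed the symmetric-encryption passphrase from a key file via the
# PASSPHRASE env var (stand-in key file here; the real one would be
# /root/backups/ose-backups-cron.key)
printf 'secret' > /tmp/demo.key
export PASSPHRASE="$(cat /tmp/demo.key)"
# duplicity /var/tmp/deprecateHetzner1/sync b2://<keyId>:<appKey>@ose-server-backups/
echo "passphrase length: ${#PASSPHRASE}"
```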
=Mon Jul 16, 2018=
- hetzner responded to my query about the absolute paths for our addon domains. They concluded that none of the directories actually exist anymore. That's not good for data retention, but at least we don't have to worry about backing them up.
Dear Mr Altfield The absolute path for your accounts public_html is: /usr/www/users/osemain This is displayed as "/" in konsoleH's Document Root. All Addon-Domains are listed below this path, assuming "/addon" as Document root, this would be "/usr/www/users/osemain/addon" I regret to say, none of your document roots are existing anymore, you may deleted them. So please re-upload your data and re-set your document roots. If we can be of any further assistance, please let us know. Mit freundlichen Grüßen / Kind regards Emanuel Wiesner
- we still probably do want to back up the home directories of these addon domains. Logging into konsoleH & clicking on each addon domain, then Services -> Access Details -> Login, gave the username & password for each login
- the 'addontest.opensourceecology.org' addon domain's user was 'addon'. It had a password, but the default shell was /bin/false. Therefore, ssh-ing in did not work on this account
user@ose:~$ ssh addon@dedi978.your-server.deaddon@dedi978.your-server.de's password: bin/false: No such file or directory Connection to dedi978.your-server.de closed. user@ose:~$
- the 'holla.opensourceecology.org' addon domain's user was 'oseholla'. This account's shell was also /bin/false.
- the 'irc.opensourceecology.org' addon domain's user was 'oseirc'. This account's shell was also /bin/false.
- the 'opensourcewarehouse.org' addon domain's user was 'openswh'. This account's shell was also /bin/false.
- the 'sandbox.opensourceecology.org' addon domain's user was 'sandbox'. This account's shell was also /bin/false.
- the 'survey.opensourceecology.org' addon domain's user was 'osesurv'. This account's shell is /bin/bash. It's the only one that I can actually log into, and it's the one that I already backed up
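For the record, the login shell can be read straight out of the passwd database instead of attempting an ssh login per account; a sketch (using root as a stand-in, since the addon users only exist on hetzner1):

```shell
# sketch: field 7 of the passwd entry is the login shell; accounts set to
# /bin/false or /usr/sbin/nologin can't get an interactive session
shell_of() { getent passwd "$1" | cut -d: -f7; }
shell_of root
```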
- I scp'd the home dir backup of osesurv that I created July 11th to hetzner2:/var/tmp/deprecateHetzner1/osesurv/final_backup_before_hetzner1_deprecation_osesurv_20180711-165315_home.tar.bz2
- now I have all the backups of all home dirs, web roots, and databases stashed on hetzner2, ready for encryption, metadata/cataloging & upload to glacier.
[root@hetzner2 deprecateHetzner1]# date Mon Jul 16 18:51:55 UTC 2018 [root@hetzner2 deprecateHetzner1]# pwd /var/tmp/deprecateHetzner1 [root@hetzner2 deprecateHetzner1]# ls -lahR .: total 32K drwx------ 8 root root 4.0K Jul 16 18:29 . drwxrwxrwt. 52 root root 4.0K Jul 15 02:08 .. drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 18:13 microft drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 18:04 oseblog drwxrwxr-x 2 maltfield maltfield 4.0K Jul 6 23:38 osecivi drwxrwxr-x 2 maltfield maltfield 4.0K Jul 6 23:47 oseforum drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 17:48 osemain drwxr-xr-x 2 root root 4.0K Jul 16 18:38 osesurv ./microft: total 6.2G drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 18:13 . drwx------ 8 root root 4.0K Jul 16 18:29 .. -rw-r--r-- 1 maltfield maltfield 1.4G Jul 11 18:12 final_backup_before_hetzner1_deprecation_microft_20180706-234228_home.tar.bz2 -rw-r--r-- 1 maltfield maltfield 523K Jul 11 18:12 final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_db2.20180706-234228.sql.bz2 -rw-r--r-- 1 maltfield maltfield 1.3M Jul 11 18:12 final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_drupal1.20180706-234228.sql.bz2 -rw-r--r-- 1 maltfield maltfield 3.3G Jul 11 18:13 final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_wiki.20180706-234228.sql.bz2 -rw-r--r-- 1 maltfield maltfield 1.7G Jul 11 18:14 final_backup_before_hetzner1_deprecation_microft_20180706-234228_webroot.tar.bz2 ./oseblog: total 4.4G drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 18:04 . drwx------ 8 root root 4.0K Jul 16 18:29 .. 
-rw-r--r-- 1 maltfield maltfield 1.2G Jul 11 18:01 final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_home.tar.bz2 -rw-r--r-- 1 maltfield maltfield 135M Jul 11 18:01 final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_mysqldump-oseblog.20180706-234052.sql.bz2 -rw-r--r-- 1 maltfield maltfield 3.1G Jul 11 18:02 final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_webroot.tar.bz2 ./osecivi: total 15M drwxrwxr-x 2 maltfield maltfield 4.0K Jul 6 23:38 . drwx------ 8 root root 4.0K Jul 16 18:29 .. -rw-r--r-- 1 maltfield maltfield 2.3M Jul 6 23:38 final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2 -rw-r--r-- 1 maltfield maltfield 1.1M Jul 6 23:38 final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2 -rw-r--r-- 1 maltfield maltfield 173K Jul 6 23:38 final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2 -rw-r--r-- 1 maltfield maltfield 12M Jul 6 23:38 final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2 ./oseforum: total 955M drwxrwxr-x 2 maltfield maltfield 4.0K Jul 6 23:47 . drwx------ 8 root root 4.0K Jul 16 18:29 .. -rw-r--r-- 1 maltfield maltfield 853M Jul 6 23:47 final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2 -rw-r--r-- 1 maltfield maltfield 46M Jul 6 23:47 final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2 -rw-r--r-- 1 maltfield maltfield 57M Jul 6 23:47 final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2 ./osemain: total 3.3G drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 17:48 . drwx------ 8 root root 4.0K Jul 16 18:29 .. 
-rw-r--r-- 1 maltfield maltfield 2.9G Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_home.tar.bz2 -rw-r--r-- 1 maltfield maltfield 1.2M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-openswh.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 187K Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_fef.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 157K Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osesurv.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 14 Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_website.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 203M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osewiki.20180706-224656.sql.bz2 -rw-r--r-- 1 maltfield maltfield 212M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_webroot.tar.bz2 ./osesurv: total 260K drwxr-xr-x 2 root root 4.0K Jul 16 18:38 . drwx------ 8 root root 4.0K Jul 16 18:29 .. -rw-r--r-- 1 maltfield maltfield 252K Jul 16 18:37 final_backup_before_hetzner1_deprecation_osesurv_20180711-165315_home.tar.bz2 [root@hetzner2 deprecateHetzner1]# du -sh 15G . [root@hetzner2 deprecateHetzner1]#
- I clicked around on the hetzner konsoleH wui to see if I could configure a firewall or shut down the server. I found nothing. So I created a new support request with hetzner asking how I could configure a firewall that would drop all incoming & outgoing tcp & udp traffic except [a] ESTABLISHED/RELATED connections and [b] tcp traffic destined for port 222 (ssh).
- I re-purposed (with heavy modifications) the 'uploadToGlacier.sh' script that was used on dreamhost's hancock. It's now at hetzner2:/root/bin/uploadToGlacier.sh. First I'll run it with backupDirs="/var/tmp/deprecateHetzner1/osesurv". If that goes fine, then I'll run it with all the other dirs listed. This script should encrypt the archives, create a metadata fileList (also encrypted), and upload both to glacier for each of the "backupDirs" I list.
- the first upload to glacier appears to have been successful. I'll wait until I can sync the archive list in the glacier vault as a test to ensure that it absolutely worked before I use the script to upload the remaining archives later this week
[root@hetzner2 bin]# date Tue Jul 17 00:35:46 UTC 2018 [root@hetzner2 bin]# ./uploadToGlacier.sh + backupDirs=/var/tmp/deprecateHetzner1/osesurv + syncDir=/var/tmp/deprecateHetzner1/sync/ + encryptionKeyFilePath=/root/backups/ose-backups-cron.key + export AWS_ACCESS_KEY_ID=<obfuscated> + AWS_ACCESS_KEY_ID=<obfuscated> + export AWS_SECRET_ACCESS_KEY=<obfuscated> + AWS_SECRET_ACCESS_KEY=<obfuscated> ++ echo /var/tmp/deprecateHetzner1/osesurv + for dir in '$(echo $backupDirs)' ++ basename /var/tmp/deprecateHetzner1/osesurv + archiveName=hetzner1_final_backup_before_hetzner1_deprecation_osesurv ++ date -u --rfc-3339=seconds + timestamp='2018-07-17 00:35:48+00:00' + fileListFilePath=/var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt + archiveFilePath=/var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar + echo ================================================================================ + echo 'This file is metadata for the archive '\hetzner1_final_backup_before_hetzner1_deprecation_osesurv'\. In it, we list all the files included in the compressed/encrypted archive (produced using '\ls -lahR /var/tmp/deprecateHetzner1/osesurv'\), including the files within the tarballs within the archive (produced using '\find /var/tmp/deprecateHetzner1/osesurv -type f -exec tar -tvf '\{}'\ \; '\)' + echo '' + echo 'This archive was made as a backup of the files and databases that were previously used on hetnzer1 prior to migrating to hetzner2 in 2018. Before we cancelled our contract for hetzner1, this backup was made & put on glacier for long-term storage in-case we learned later that we missed some content on the migration. 
Form more information, please see the following link: echo ' aws glacier.py letsencrypt uploadToGlacier.sh uploadToGlacier.sh.20180716.orig 'https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation echo echo >> /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt echo ' - Michael Altfield Note: this file was generated at 2018-07-17 '00:35:48+00:00 >> /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt echo ================================================================================ >> /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt echo ############################# >> /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt echo #' 'ls -lahR' output follows ./uploadToGlacier.sh: line 43: maltfield@opensourceecology.org: No such file or directory + echo '#############################' + ls -lahR /var/tmp/deprecateHetzner1/osesurv + echo ================================================================================ + echo '############################' + echo '# tarball contents follows #' + echo '############################' + find /var/tmp/deprecateHetzner1/osesurv -type f -exec tar -tvf '{}' ';' + echo ================================================================================ + bzip2 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt + gpg --symmetric --cipher-algo aes --batch --passphrase-file /root/backups/ose-backups-cron.key /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2 + rm /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt rm: cannot remove ‘/var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt’: No such file 
or directory + /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg + 0 -eq 0 + rm -f /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg + tar -cvf /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar /var/tmp/deprecateHetzner1/osesurv/ tar: Removing leading `/' from member names /var/tmp/deprecateHetzner1/osesurv/ /var/tmp/deprecateHetzner1/osesurv/final_backup_before_hetzner1_deprecation_osesurv_20180711-165315_home.tar.bz2 + gpg --symmetric --cipher-algo aes --batch --passphrase-file /root/backups/ose-backups-cron.key /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar + rm /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar + /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg + 0 -eq 0 + rm -f /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg [root@hetzner2 bin]#
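Condensed, the per-directory flow of the script is roughly the following (a sketch reconstructed from the trace above, with stand-in paths; the gpg and glacier.py steps are left as comments so the sketch stays self-contained):

```shell
# sketch of uploadToGlacier.sh's per-directory flow: catalog, archive, then
# (commented out here) encrypt & upload each artifact
backupDirs="/tmp/demo-glacier/osesurv"
syncDir="/tmp/demo-glacier/sync"
mkdir -p "$backupDirs" "$syncDir"
echo "payload" > "${backupDirs}/example_home.tar.bz2"
for dir in $backupDirs; do
  archiveName="hetzner1_final_backup_$(basename "$dir")"
  fileList="${syncDir}/${archiveName}.fileList.txt"
  ls -lahR "$dir" > "$fileList"     # metadata catalog of the archive contents
  # bzip2 "$fileList" && gpg --symmetric --cipher-algo aes --batch \
  #   --passphrase-file "$keyFile" "${fileList}.bz2"
  # glacier.py --region us-west-2 archive upload deleteMeIn2020 "${fileList}.bz2.gpg"
  tar -cf "${syncDir}/${archiveName}.tar" -C "$(dirname "$dir")" "$(basename "$dir")"
  # gpg --symmetric ... "${syncDir}/${archiveName}.tar" and upload the .tar.gpg too
done
ls "$syncDir"
```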
- I initiated a `--wait`-ing vault sync with glacier.py
[root@hetzner2 sync]# glacier.py --region us-west-2 vault sync --wait deleteMeIn2020
- ...
- I did some digging into reviews comparing cloud storage providers against Backblaze B2, namely AWS Glacier & S3. The issue with Glacier & S3 is that they impose a minimum storage time. That totally breaks our delete-daily-uploads-after-3-days model and makes a _huge_ difference in GB stored per month and, therefore, in the annual bill. For Amazon, we'd be charged for a minimum of 30 days * 15G = 450 GB. For Backblaze, we'd be able to delete after 3 days, so we'd only have ~45G stored. Because Backblaze is only slightly more expensive than Glacier, that could be an ~8-10x difference in cost!
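Spelled out with the numbers above (the minimum-storage window and daily volume are this log's assumptions, not current AWS/B2 quotes):

```shell
# worked version of the comparison: 15 GB/day of uploads, a 30-day
# minimum-storage window on the AWS side vs deleting after 3 days on B2
daily_gb=15
aws_min_days=30
b2_retention_days=3
aws_stored=$(( daily_gb * aws_min_days ))       # GB billed at any one time
b2_stored=$(( daily_gb * b2_retention_days ))
echo "aws=${aws_stored}GB b2=${b2_stored}GB ratio=$(( aws_stored / b2_stored ))x"
# prints: aws=450GB b2=45GB ratio=10x
```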
- unfortunately, many of the reviews I found on Backblaze talked about them as an "unlimited storage for a single computer" service. I'm not sure this is the product that would apply to our server..
- https://www.cloudwards.net/azure-vs-amazon-s3-vs-google-vs-backblaze-b2/#one
- this one listed Amazon over B2 because of their security. We don't give a shit about their security because we're just encrypting it before uploading it anyway! TNO! B2 beat AWS regarding cost, with no major cons that I could see on this review.
- backblaze does have a CLI tool, but everything they offer appears to target windows & mac. There is no mention of linux support, but the github shows it's a python tool. Maybe it works in linux? https://github.com/Backblaze/B2_Command_Line_Tool
- ah, backblaze does reference linux support here https://help.backblaze.com/hc/en-us/articles/217664628-How-does-Backblaze-support-Linux-Users-
- they say duplicity supports backblaze. That's great! I wanted to use duplicity before. I'll have to look into this option more.
- I logged into the G Suite Admin = Gapps = https://admin.google.com
- confirmed that I now have more permissions since Marcin made me a Super Admin here
- I guess we're grandfathered-into a free plan (up to 200 licenses; currently we're at 33)
- I created a new user backblaze@opensourceecology.org. We'll use this to create a new account for backblaze.
=Sat Jul 14, 2018=
- Marcin granted me super admin rights on our Google Suite account (so I can whitelist our IPs for Gmail); I haven't tested this access yet
- Marcin mentioned that the STL files I produced for 3d printing parts from the Prusa lacked recessed nut catchers. We compared screenshots: his freecad showed the recesses where the nuts go while mine did not. This is a strange discrepancy which Marcin said should be resolved by everyone running oselinux. I'll have to set up an HVM for OSE Linux. I can't use it for 99% of my daily tasks as it currently lacks Persistence.
- I had several back-and-forth emails with Chris about enabling Persistence in OSE Linux. Progress is being made. Code is in github & documented on our wiki here https://wiki.opensourceecology.org/wiki/OSE_Linux_Persistence
- Hetzner got back to me about the addon domains' document roots. They simply told me to check konsoleH. Indeed, the "Document root" _is_ listed when you click on each addon domain, but it's a useless string. I emailed them back asking them to either tell us the absolute path to each of our 6x addon domains or send me the entire contents of the /etc/httpd directory so I could figure it out myself (again, I don't have root on this old server)
Hi Bastian, Can you please tell me the absolute path of each of our addon domains? It looks like we have 6x addon domains under our 'osemain' account. As you suggested, I clicked on each of the domains in konsoleH and checked the string listed under their "Document Root" in konsleH. Here's the results: addontest.opensourceecology.org /addon-domains/addontest/ holla.opensourceecology.org /addon-domains/holla irc.opensourceecology.org /addon-domains/irc/ opensourcewarehouse.org /archive/addon-domains/opensou… sandbox.opensourceecology.org /addon-domains/sandbox survey.opensourceecology.org /addon-domains/survey/ Unfortunately, none of these paths are absolute paths. Therefore, they are ambiguous. Assuming they are merely underneath the master account's docroot, I'd assume these document root directories would be relative to '/usr/home/osemain/public_html/'. However, most of these directories do not exist in that directory. osemain@dedi978:~/public_html$ date Sat Jul 14 22:54:52 CEST 2018 osemain@dedi978:~/public_html$ pwd /usr/home/osemain/public_html osemain@dedi978:~/public_html$ ls -lah total 32K drwx---r-x 5 osemain users 4.0K May 25 17:42 . drwxr-x--x 14 root root 4.0K Jul 12 14:29 .. drwxr-xr-x 13 osemain osemain 4.0K Jun 20 2017 archive -rwxr-xr-x 1 osemain osemain 1.9K Mar 1 20:31 .htaccess drwxr-xr-x 2 osemain osemain 4.0K Sep 17 2017 logs drwxr-xr-x 14 osemain osemain 4.0K Mar 31 2015 mediawiki-1.24.2.extra -rw-r--r-- 1 osemain osemain 526 Jun 19 2015 old.html -rw-r--r-- 1 osemain osemain 883 Jun 19 2015 oldu.html.done osemain@dedi978:~/public_html$ There is no 'addon-domains' directory here. The only directory that matches the "Document root"s extracted from konsoleH as listed above is for 'opensourcewarehouse.org', which is listed as being inside a directory 'archive'. Unfortunately, I can't even see what that directory exactly is. The ellipsis (...) in "/archive/addon-domains/opensou…" is literally in the string that konsoleH gave me. 
Can you please provide for me the _absolute_ path of the document roots for 6x the vhosts listed as "addon-domains" listed above? If you could attach the contents of the /etc/httpd directory, that would also be extremely helpful in figuring this information out myself. Please provide for me the absolute paths of the 6x document roots of the "addon-domains". Thank you, Michael Altfield Senior System Administrator PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7 70D2 AA3E DF71 60E2 D97B Open Source Ecology www.opensourceecology.org
- Emailed Marcin about All Power Labs, a biomass generator company based in Berkeley & added a wiki article about them https://wiki.opensourceecology.org/wiki/AllPowerLabs
=Thu Jul 12, 2018=
- hetzner got back to me about adding the PTR = RDNS entry. They say I can self-service this request via robot "under the tab IP...click on the small plus symbol beside of the IPv6 subnet."
You set the RDNS entry yourself via robot. You can do it directly at the server under the tab IP. Please click on the small plus symbol beside of the IPv6 subnet. Best regards Ralf Sager
- I found it: After logging in, click "Servers" (under the "Main Functions" header on the left), then click on our server, then click the "IPs" tab (it was the first = default tab). Indeed, there is a very small plus symbol to the left of our ipv6 subnet = " 2a01:4f8:172:209e:: / 64". Clicking on that plus symbol opens a simple form asking for an "IP Address" + "Reverse DNS entry".
- since we have a whole ipv6 subnet, it appears that we can have multiple entries here. I entered "2a01:4f8:172:209e::2" for the ip address (as this was what google reported to us) and "opensourceecology.org" for the "Reverse DNS entry".
- interestingly, there were no RDNS entries for the ipv4 addresses above. I set those to 'opensourceecology.org' as well.
- it worked immediately!
user@personal:~$ dig +short -x "2a01:4f8:172:209e::2" opensourceecology.org. user@personal:~$
- I emailed Ralf at hetzner back, asking if this self-serviceability of setting the RDNS = PTR address is just as trivial for hetzner cloud nodes as it is for hetzner dedicated baremetal servers
- here's the whole PTR = RDNS response using dig on our ipv6 address
user@personal:~$ dig -x "2a01:4f8:172:209e::2" ; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 2a01:4f8:172:209e::2 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7215 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 7 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. IN PTR ;; ANSWER SECTION: 2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. 5937 IN PTR opensourceecology.org. ;; AUTHORITY SECTION: 8.f.4.0.1.0.a.2.ip6.arpa. 5937 IN NS ns1.your-server.de. 8.f.4.0.1.0.a.2.ip6.arpa. 5937 IN NS ns.second-ns.com. 8.f.4.0.1.0.a.2.ip6.arpa. 5937 IN NS ns3.second-ns.de. ;; ADDITIONAL SECTION: ns.second-ns.com. 5241 IN A 213.239.204.242 ns.second-ns.com. 111071 IN AAAA 2a01:4f8:0:a101::b:1 ns1.your-server.de. 84441 IN A 213.133.106.251 ns1.your-server.de. 24671 IN AAAA 2a01:4f8:d0a:2006::2 ns3.second-ns.de. 24672 IN A 193.47.99.4 ns3.second-ns.de. 24671 IN AAAA 2001:67c:192c::add:b3 ;; Query time: 3 msec ;; SERVER: 10.137.2.1#53(10.137.2.1) ;; WHEN: Thu Jul 12 13:51:54 EDT 2018 ;; MSG SIZE rcvd: 358 user@personal:~$
- ...
- hetzner got back to me about the public_html directory being "permission denied" for the 'osesurv' user. They said that the document root is in the main user's public_html dir. I asked for them to tell me the absolute path to this dir, as I cannot check the apache config without root.
Dear Mr Altfield all website data is always saved in the main account. Addon domains only use files from the main domains public_html folder. If we can be of any further assistance, please let us know. Mit freundlichen Grüßen / Kind regards Jan Barnewold
=Wed Jul 11, 2018=
- hetzner got back to me, stating that I should go to "Services -> Login" in order to access the home directory of the 'osesurv' account (at /usr/home/osesurv)
Dear Mr Altfield every addon domain has it's own home directory. The login details can be found under Services -> Login. If you have any further questions, please feel free to contact me.
- I navigated to the addon domain in the hetzner wui konsoleh & to Services -> Login. I got a username & password. This let me ssh into the server as the user!
osesurv@dedi978:~$ date Wed Jul 11 18:32:36 CEST 2018 osesurv@dedi978:~$ pwd /usr/home/osesurv osesurv@dedi978:~$ whoami osesurv osesurv@dedi978:~$ ls -lah total 80K drwx--x--- 5 osesurv mail 4.0K Sep 21 2011 . drwxr-x--x 14 root root 4.0K Mar 9 2013 .. -rw-r--r-- 1 osesurv osesurv 220 Apr 10 2010 .bash_logout -rw-r--r-- 1 osesurv osesurv 3.2K Apr 10 2010 .bashrc -rw-r--r-- 1 osesurv osesurv 40 Sep 21 2011 .forward -rw-r----- 1 osesurv mail 2.2K Sep 21 2011 passwd.cdb -rw-r--r-- 1 osesurv osesurv 675 Apr 10 2010 .profile lrwxrwxrwx 1 root root 23 Sep 21 2011 public_html -> ../../www/users/osesurv -rw-r--r-- 1 osesurv osesurv 40 Sep 21 2011 .qmail -rw-r--r-- 1 osesurv osesurv 25 Sep 21 2011 .qmail-default drwxr-x--- 2 osesurv osesurv 4.0K Sep 21 2011 .tmp drwxrwxr-x 3 osesurv mail 4.0K Sep 21 2011 users drwxr-xr-x 2 osesurv osesurv 36K Mar 6 2014 www_logs osesurv@dedi978:~$
- none of these addon domains can have databases (I think), but it appears that I need to get all their home & web files
- began backing-up files on addon domain = osesurv
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
- so ^ that failed. The home dir was accessible, but I'm getting a permission denied issue with the www dir linked to by public_html.
osesurv@dedi978:~$ # backup web root files osesurv@dedi978:~$ time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/* tar: Removing leading `/' from member names tar: /usr/www/users/osesurv/*: Cannot stat: Permission denied tar: Exiting with failure status due to previous errors real 0m0.013s user 0m0.004s sys 0m0.000s osesurv@dedi978:~$
- I emailed hetzner back about this, asking how I can access this user's www dir
Hi Jan, Thank you. I was able to ssh in, and I was able to access the user's home directory. But I cannot access the user's www directory. user@ose:~$ ssh osesurv@dedi978.your-server.de osesurv@dedi978.your-server.de's password: Last login: Wed Jul 11 18:30:46 2018 from 108.160.67.63 osesurv@dedi978:~$ date Wed Jul 11 18:58:36 CEST 2018 osesurv@dedi978:~$ pwd /usr/home/osesurv osesurv@dedi978:~$ whoami osesurv osesurv@dedi978:~$ ls noBackup passwd.cdb public_html users www_logs osesurv@dedi978:~$ ls -lah public_html lrwxrwxrwx 1 root root 23 Sep 21 2011 public_html -> ../../www/users/osesurv osesurv@dedi978:~$ ls -lah public_html/ ls: cannot open directory public_html/: Permission denied osesurv@dedi978:~$ ls -lah ../../www/users/osesurv ls: cannot open directory ../../www/users/osesurv: Permission denied osesurv@dedi978:~$ ls -lah /usr/www/users/osesurv/ ls: cannot open directory /usr/www/users/osesurv/: Permission denied osesurv@dedi978:~$ Note that I also cannot access this directory from the 'osemain' user under which the addon domain 'osesurv' exists: osemain@dedi978:~$ date Wed Jul 11 19:02:03 CEST 2018 osemain@dedi978:~$ whoami osemain osemain@dedi978:~$ ls -lah /usr/www/users/osesurv ls: cannot access /usr/www/users/osesurv/.: Permission denied ls: cannot access /usr/www/users/osesurv/..: Permission denied total 0 d????????? ? ? ? ? ? . d????????? ? ? ? ? ? .. osemain@dedi978:~$ Can you please tell me how I can access the files in '/usr/www/users/osesurv/'? Is it possible to do so over ssh? Thank you, Michael Altfield Senior System Administrator PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7 70D2 AA3E DF71 60E2 D97B Open Source Ecology www.opensourceecology.org
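A general trick for this kind of mid-path "Permission denied": stat each component of the path in turn to see which one lacks traverse (x) permission for your user; `namei -l <path>` from util-linux does the same thing in one shot. A sketch with a stand-in path:

```shell
# sketch: walk each component of a path with ls -ld to find the one that
# blocks traversal (on hetzner1 the path would be /usr/www/users/osesurv)
mkdir -p /tmp/demo-walk/www/users/osesurv
chmod 700 /tmp/demo-walk/www      # simulate a component other users can't enter
p=""
for c in tmp demo-walk www users; do
  p="$p/$c"
  ls -ld "$p"                     # the first denied / 'd?????????' line is the culprit
done
```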
- ...
- I went to check to see if the PTR dns entry was in-place for a reverse lookup of our ipv6 address that I created yesterday. Unfortunately, there's no change
user@personal:~$ dig +short -x 138.201.84.243 static.243.84.201.138.clients.your-server.de. user@personal:~$ dig +short -x "2a01:4f8:172:209e::2" user@personal:~$
- here's the long results
user@personal:~$ date Wed Jul 11 13:16:16 EDT 2018 user@personal:~$ dig -x 138.201.84.243 ; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 138.201.84.243 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35146 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 7 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;243.84.201.138.in-addr.arpa. IN PTR ;; ANSWER SECTION: 243.84.201.138.in-addr.arpa. 86108 IN PTR static.243.84.201.138.clients.your-server.de. ;; AUTHORITY SECTION: 84.201.138.in-addr.arpa. 86108 IN NS ns3.second-ns.de. 84.201.138.in-addr.arpa. 86108 IN NS ns1.your-server.de. 84.201.138.in-addr.arpa. 86108 IN NS ns.second-ns.com. ;; ADDITIONAL SECTION: ns.second-ns.com. 4381 IN A 213.239.204.242 ns.second-ns.com. 169981 IN AAAA 2a01:4f8:0:a101::b:1 ns1.your-server.de. 83581 IN A 213.133.106.251 ns1.your-server.de. 83581 IN AAAA 2a01:4f8:d0a:2006::2 ns3.second-ns.de. 83581 IN A 193.47.99.4 ns3.second-ns.de. 83581 IN AAAA 2001:67c:192c::add:b3 ;; Query time: 6 msec ;; SERVER: 10.137.2.1#53(10.137.2.1) ;; WHEN: Wed Jul 11 13:16:22 EDT 2018 ;; MSG SIZE rcvd: 322 user@personal:~$ dig -x "2a01:4f8:172:209e::2" ; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 2a01:4f8:172:209e::2 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57144 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. IN PTR ;; AUTHORITY SECTION: 8.f.4.0.1.0.a.2.ip6.arpa. 6890 IN SOA ns1.your-server.de. postmaster.your-server.de. 2018084081 86400 1800 3600000 86400 ;; Query time: 3 msec ;; SERVER: 10.137.2.1#53(10.137.2.1) ;; WHEN: Wed Jul 11 13:16:28 EDT 2018 ;; MSG SIZE rcvd: 166 user@personal:~$
- if we encounter these errors again, I think we'll have to contact hetzner to create these PTR entries for the ipv6 addresses. I don't think I have the ability to do this from our server or from our nameserver at cloudflare
- hetzner has an article on this issue, but they merely state to contact their support team https://hetzner.co.za/help-centre/domains/ptr/
- I went ahead and contacted hetzner (via our robot portal for hetzner2--distinct from hetzner1's konsoleH) asking them to create the PTR record for our ipv6 addresses. And I asked them if this is something I could do myself or if it necessarily requires a change on their end.
- note that this may not be a serviceable request for some types of accounts at hetzner, and it is a valid concern when moving from a dedicated baremetal server to other types of accounts, such as a cloud server. I documented this concern in the "looking forward" section of the OSE Server article https://wiki.opensourceecology.org/wiki/OSE_Server#Non-dedicated_baremetal_concerns
- ...
- while I wait for hetzner support's response about how to access all the files for the addon domains, I'll copy the finished backups from the other 5x domains (as opposed to addon domains) to hetzner2 (osemain, osecivi, oseblog, oseforum, and microft), staging them for upload to glacier
- osemain's backups (after compression) came to a total of 3.3G
osemain@dedi978:~$ date Wed Jul 11 19:42:29 CEST 2018 osemain@dedi978:~$ pwd /usr/home/osemain osemain@dedi978:~$ whoami osemain osemain@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/* 2.9G noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_home.tar.bz2 1.2M noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-openswh.20180706-224656.sql.bz2 192K noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_fef.20180706-224656.sql.bz2 164K noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osesurv.20180706-224656.sql.bz2 4.0K noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_website.20180706-224656.sql.bz2 204M noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osewiki.20180706-224656.sql.bz2 212M noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_webroot.tar.bz2 osemain@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/ 3.3G noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/ osemain@dedi978:~$
- osecivi's backups (after compression) came to a total of 15M
osecivi@dedi978:~$ date Wed Jul 11 19:49:44 CEST 2018 osecivi@dedi978:~$ pwd /usr/home/osecivi osecivi@dedi978:~$ whoami osecivi osecivi@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/* 2.3M noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2 1.1M noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2 180K noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2 12M noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2 osecivi@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/ 15M noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/ osecivi@dedi978:~$
- oseblog's backups (after compression) came to a total of 4.4G
oseblog@dedi978:~$ date Wed Jul 11 19:58:51 CEST 2018 oseblog@dedi978:~$ pwd /usr/home/oseblog oseblog@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/* 1.3G noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_home.tar.bz2 135M noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_mysqldump-oseblog.20180706-234052.sql.bz2 3.1G noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_webroot.tar.bz2 oseblog@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/ 4.4G noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/ oseblog@dedi978:~$
- oseforum's backups (after compression) came to a total of 956M
oseforum@dedi978:~$ date Wed Jul 11 20:02:04 CEST 2018 oseforum@dedi978:~$ pwd /usr/home/oseforum oseforum@dedi978:~$ whoami oseforum oseforum@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/* 854M noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2 46M noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2 57M noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2 oseforum@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/ 956M noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/ oseforum@dedi978:~$
- microft's backups (after compression) came to a total of 6.2G
microft@dedi978:~$ date Wed Jul 11 20:06:19 CEST 2018 microft@dedi978:~$ pwd /usr/home/microft microft@dedi978:~$ whoami microft microft@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/* 1.4G noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_home.tar.bz2 528K noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_db2.20180706-234228.sql.bz2 1.3M noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_drupal1.20180706-234228.sql.bz2 3.3G noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_wiki.20180706-234228.sql.bz2 1.7G noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_webroot.tar.bz2 microft@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/ 6.2G noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/ microft@dedi978:~$
- therefore, the total for the 5x domains (excluding addon domains) dropped from ~34.87G before compression to 14.871G after compression
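Sanity-checking that after-compression total (treating 15M as 0.015G and 956M as 0.956G, per the `du -sh` outputs above):

```shell
# osemain 3.3G + osecivi 0.015G + oseblog 4.4G + oseforum 0.956G + microft 6.2G
awk 'BEGIN { printf "%.3f\n", 3.3 + 0.015 + 4.4 + 0.956 + 6.2 }'   # → 14.871
```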
- that's a totally reasonable size to back up. In fact, I think I'll leave some of these backups live on hetzner2. I should definitely do so for the forum, in case we ever want to make that site rw again.
- I went ahead and created the "hot" backup of the ose forums in the corresponding apache dir
[root@hetzner2 forum.opensourceecology.org]# date Wed Jul 11 19:16:55 UTC 2018 [root@hetzner2 forum.opensourceecology.org]# pwd /var/www/html/forum.opensourceecology.org [root@hetzner2 forum.opensourceecology.org]# du -sh * 955M final_backup_before_hetzner1_deprecation_oseforum_20180706-230007 2.7G htdocs 4.0K readme.txt 173M vanilla_docroot_backup.20180113 [root@hetzner2 forum.opensourceecology.org]# du -sh final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/* 853M final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2 46M final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2 57M final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2 [root@hetzner2 forum.opensourceecology.org]#
- I created a readme.txt explaining what happened for the future sysadmin
[root@hetzner2 forum.opensourceecology.org]# cat readme.txt In 2018, the forums were no longer moderated or maintained, and the decision was made to deprecate support for the site. The content is still accessible as static content; new content is not possible. For more information, please see: * https://wiki.opensourceecology.org/wiki/CHG-2018-02-04 On 2018-07-11, during the backup stage of the change to deprecate hetzner1, a backup of the vanilla forums home directory, webroot directory, and database dump was created for upload to long-term backup storage on glacier. Because this backup size was manageably small (1G, which is actually smaller than the 2.7G of static content currently live in the forum's docroot), I put a "hot" copy of this dump in the forum's apache dir (but outside the htdocs root, of course) located at hetzner2:/var/www/html/forum.opensourceecology.org/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/ * https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation -- Michael Altfield <michael@opensourceecology.org> 2018-07-11 [root@hetzner2 forum.opensourceecology.org]#
- and, finally, I updated the relevant wiki articles for the forums
- I scp'd all these tarballs to hetzner2
[root@hetzner2 deprecateHetzner1]# date Wed Jul 11 18:17:14 UTC 2018 [root@hetzner2 deprecateHetzner1]# pwd /var/tmp/deprecateHetzner1 [root@hetzner2 deprecateHetzner1]# du -sh /var/tmp/deprecateHetzner1/* 6.2G /var/tmp/deprecateHetzner1/microft 4.4G /var/tmp/deprecateHetzner1/oseblog 15M /var/tmp/deprecateHetzner1/osecivi 955M /var/tmp/deprecateHetzner1/oseforum 3.3G /var/tmp/deprecateHetzner1/osemain [root@hetzner2 deprecateHetzner1]# du -sh /var/tmp/deprecateHetzner1/ 15G /var/tmp/deprecateHetzner1/ [root@hetzner2 deprecateHetzner1]#
- I still need to generate the metadata files that explain what these tarballs hold, with a message + file list (`tar -t`). This will also hopefully serve as a test to validate that the files were not corrupted in transit during the scp or tar creation. I generally prefer rsync so I can double-tap, but I had some issues with ssh key auth with rsync (so I just used scp, which auth'd fine).
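That metadata step could be sketched roughly as follows (the staging-dir layout is assumed from the `du` output above; `tar -t` must read the entire archive, so a clean exit also doubles as a corruption check):

```shell
# Hedged sketch: write a <tarball>.fileList.txt metadata file next to each
# backup tarball, containing a timestamp header plus the archive's file list.
gen_tar_metadata() {
  # $1 = staging dir containing one subdir per hetzner1 domain
  for tarball in "$1"/*/*.tar.bz2; do
    [ -e "$tarball" ] || continue          # glob matched nothing; skip
    meta="${tarball}.fileList.txt"
    {
      echo "File list for $(basename "$tarball"), generated $(date -u)"
      tar -tvf "$tarball"                  # fails if the archive is corrupt
    } > "$meta" || echo "WARNING: $tarball failed to list; possibly corrupt" >&2
  done
}

# the staging dir used above on hetzner2:
gen_tar_metadata /var/tmp/deprecateHetzner1
```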
- of course, I also still need to generate the backup tarballs for the addon domains after hetzner gets back to me on how to access their web roots.
- ...
- I began looking back at the hancock:/home/marcin_ose/backups/uploadToGlacier.sh file that I used back in March to generate metadata files for each of the encrypted tarballs dumped onto glacier https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q1#Sat_Mar_31.2C_2018
hancock% cat uploadToGlacier.sh #!/bin/bash -x ############ # SETTINGS # ############ #backupDirs="hetzner2/20171101-072001" #backupDirs="hetzner1/20170901-052001" #backupDirs="hetzner1/20171001-052001" #backupDirs="hetzner1/20171101-062001 hetzner1/20171201-062001" #backupDirs="hetzner1/20171201-062001" backupDirs="hetzner2/20170702-052001 hetzner2/20170801-072001 hetzner2/20170901-072001 hetzner2/20171001-072001 hetzner2/20171101-072001 hetzner2/20171202-072001 hetzner2/20180102-072001 hetzner2/20180202-072001 hetzner2/20180302-072001 hetzner2/20180401-072001 hetzner1/20170701-052001 hetzner1/20170801-052001 hetzner1/20180101-062001 hetzner1/20180201-062001 hetzner1/20180301-062002 hetzner1/20180401-052001" syncDir="/home/marcin_ose/backups/uploadToGlacier" encryptionKeyFilePath="/home/marcin_ose/backups/ose-backups-cron.key" export AWS_ACCESS_KEY_ID='<obfuscated>' export AWS_SECRET_ACCESS_KEY='<obfuscated>' ############## # DO UPLOADS # ############## for dir in $(echo $backupDirs); do archiveName=`echo ${dir} | tr '/' '_'`; timestamp=`date -u --rfc-3339=seconds` fileListFilePath="${syncDir}/${archiveName}.fileList.txt" archiveFilePath="${syncDir}/${archiveName}.tar" ######################### # archive metadata file # ######################### # first, generate a file list to help the future sysadmin get metadata about the archvie without having to download the huge archive itself echo "================================================================================" > "${fileListFilePath}" echo "This file is metadata for the archive '${archiveName}'. 
In it, we list all the files included in the compressed/encrypted archive (produced using 'ls -lahR ${dir}'), including the files within the tarballs within the archive (produced using 'find "${dir}" -type f -exec tar -tvf '{}' \; ')" >> "${fileListFilePath}" echo "" >> "${fileListFilePath}" echo " - Michael Altfield <maltfield@opensourceecology.org>" >> "${fileListFilePath}" echo "" >> "${fileListFilePath}" echo " Note: this file was generated at ${timestamp}" >> "${fileListFilePath}" echo "================================================================================" >> "${fileListFilePath}" echo "#############################" >> "${fileListFilePath}" echo "# 'ls -lahR' output follows #" >> "${fileListFilePath}" echo "#############################" >> "${fileListFilePath}" ls -lahR ${dir} >> "${fileListFilePath}" echo "================================================================================" >> "${fileListFilePath}" echo "############################" >> "${fileListFilePath}" echo "# tarball contents follows #" >> "${fileListFilePath}" echo "############################" >> "${fileListFilePath}" find "${dir}" -type f -exec tar -tvf '{}' \; >> "${fileListFilePath}" echo "================================================================================" >> "${fileListFilePath}" # compress the metadata file bzip2 "${fileListFilePath}" # encrypt the metadata file #gpg --symmetric --cipher-algo aes --armor --passphrase-file "${encryptionKeyFilePath}" "${fileListFilePath}.bz2" gpg --symmetric --cipher-algo aes --passphrase-file "${encryptionKeyFilePath}" "${fileListFilePath}.bz2" # delete the unencrypted archive rm "${fileListFilePath}" # upload it #/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}_metadata.txt.bz2.asc: this is a metadata file showing the file and dir list contents of the archive of the same name" --body 
"${fileListFilePath}.bz2.asc" #/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}_metadata.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same name" --body "${fileListFilePath}.bz2.gpg" /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 "${fileListFilePath}.bz2.gpg" if [ $? -eq 0 ]; then rm -f "${fileListFilePath}.bz2.gpg" fi ################ # archive file # ################ # generate archive file as a single, compressed file tar -cvf "${archiveFilePath}" "${dir}/" # encrypt the archive file #gpg --symmetric --cipher-algo aes --armor --passphrase-file "${encryptionKeyFilePath}" "${archiveFilePath}" gpg --symmetric --cipher-algo aes --passphrase-file "${encryptionKeyFilePath}" "${archiveFilePath}" # delete the unencrypted archive rm "${archiveFilePath}" # upload it #/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}.tar.gz.asc: this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates" --body "${archiveFilePath}.asc" #/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}.tar.gz.gpg: this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates" --body "${archiveFilePath}.gpg" /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 "${archiveFilePath}.gpg" if [ $? -eq 0 ]; then rm -f "${archiveFilePath}.gpg" fi done hancock%
=Tue Jul 10, 2018=
- hetzner got back to me as expected, stating that it's an addon domain. It's hard to convey via email (plus through a language barrier) that I'm already aware there's an addon domain of the same name (survey), but that there is also a distinct directory & user--unlike the other addon domains on the physical server--that I cannot access. I'm assuming it was previously used as a non-addon domain and then an addon domain was created. Or something. In any case, there is an actual directory '/usr/home/osesurv' that I need to access. I replied to them asking for the output of `ls -lah /usr/home/osesurv` to be sent to me.
- Marcin forwarded an error report from google's webmaster tools. It showed 1 issue; I'm not concerned. That tool reports a lot of false positives (special pages, robots, etc)
- Marcin sent me emails about two users who have not received emails (containing their temp password) after registering for an account on the wiki
- Miles Ransaw <milesransaw@gmail.com>
- Harman Bains <bains.hmn@gmail.com>
- this is extremely frustrating, as it appears that mediawiki does send emails most of the time, but occasionally users complain that emails never come in (even after checking the spam folder). I can't find a way to reproduce this issue, so what I really need to do is find some logs containing the above users' names/emails.
- did some digging around the source code & confirmed that mediawiki falls back to the mail() function of php in includes/mail/UserMailer.php:sendInternal(). It looks like it also has support for the PEAR mailer. There's a debug line that indicates when it has to fall back to mail().
if ( !stream_resolve_include_path( 'Mail/mime.php' ) ) { wfDebug( "PEAR Mail_Mime package is not installed. Falling back to text email.\n" );
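The same include-path check can be reproduced from the server's command line (assuming the `php` CLI is installed):

```shell
# Mirrors the stream_resolve_include_path('Mail/mime.php') check quoted above:
# prints the resolved path if PEAR Mail_Mime is on the include path, or
# bool(false) if MediaWiki would fall back to plain-text mail().
php -r 'var_dump(stream_resolve_include_path("Mail/mime.php"));'
```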
- checking with `pear list`, it looks like we don't have PEAR Mail_Mime installed
[root@hetzner2 ~]# pear list ... INSTALLED PACKAGES, CHANNEL PEAR.PHP.NET: ========================================= PACKAGE VERSION STATE Archive_Tar 1.4.2 stable Console_Getopt 1.4.1 stable PEAR 1.10.4 stable Structures_Graph 1.1.1 stable XML_Util 1.4.2 stable [root@hetzner2 ~]#
- I even checked to see if it was a file within the "PEAR" package. It isn't there.
[root@hetzner2 ~]# pear list-files PEAR | grep -i mail PHP Warning: ini_set() has been disabled for security reasons in /usr/share/pear/pearcmd.php on line 30 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 [root@hetzner2 ~]# pear list-files PEAR | grep -i mime PHP Warning: ini_set() has been disabled for security reasons in /usr/share/pear/pearcmd.php on line 30 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813 [root@hetzner2 ~]#
- compare this to our old server, and we see a discrepancy! The old server has this module. Perhaps this is the issue?
osemain@dedi978:~$ pear list Installed packages, channel pear.php.net: ========================================= Package Version State Archive_Tar 1.4.3 stable Console_Getopt 1.4.1 stable DB 1.7.14 stable Date 1.4.7 stable File 1.3.0 stable HTTP 1.4.1 stable HTTP_Request 1.4.4 stable Log 1.12.8 stable MDB2 2.5.0b5 beta Mail 1.2.0 stable Mail_Mime 1.8.9 stable Mail_mimeDecode 1.5.5 stable Net_DIME 1.0.2 stable Net_IPv4 1.3.4 stable Net_SMTP 1.6.2 stable Net_Socket 1.0.14 stable Net_URL 1.0.15 stable PEAR 1.10.5 stable SOAP 0.13.0 beta Structures_Graph 1.1.1 stable XML_Parser 1.3.4 stable XML_Util 1.4.2 stable osemain@dedi978:~$
- a quick yum search shows the package (we don't fucking want to use the pear package manager)
[root@hetzner2 ~]# yum search pear | grep -i mail php-channel-swift.noarch : Adds swift mailer project channel to PEAR php-pear-Mail.noarch : Class that provides multiple interfaces for sending : emails php-pear-Mail-Mime.noarch : Classes to create MIME messages php-pear-Mail-mimeDecode.noarch : Class to decode mime messages [root@hetzner2 ~]#
- so I _could_ install this, but I really want to develop some test that proves it doesn't work, then install. Then re-test & confirm it's fixed.
- it looks like we can trigger Mediawiki sending a user an email via this Special:EmailUser page https://wiki.opensourceecology.org/index.php?title=Special:EmailUser/
- well, I did that, and the email went through. It came from 'contact@opensourceecology.org'
- unfortunately, it looks like wiki-error.log hasn't populated since Jun 18th. I was monitoring the other logs when I triggered the EmailUser above; it was shown in access_log, and nothing came into error_log
- I changed the permissions of this wiki-error.log file to "not-apache:apache-admins" and 0660, and it began writing
- hopefully this won't fill the disk! iirc, mediawiki has some mechanism to prevent infinite growth..
- I re-triggered the email, but (surprisingly), I saw very little Mail-related info in the wiki-error.log file, except
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org> Sending mail via internal mail() function MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed User: loading options for user 3709 from override cache. OutputPage::sendCacheControl: private caching; ** [error] [W0U@lnhB25P5rXweNx8gYQAAAAs] /wiki/Special:EmailUser ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons
- that "ini_set() has been disabled for security reasons" occurs all the time; it shouldn't be an issue. Indeed, the email came through. What I was expecting to see was the "PEAR Mail_Mime package is not installed. Falling back to text email" message. It didn't appear.
- I opened the wiki-error.log file in vim, and then I did find this:
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org> Sending mail via internal mail() function
- whatever, different logic location. that's a good enough test. let me try to install the pear module, retry the email send. If everything still works, I'll ask the users to try again. Maybe that will just fix it. In any case, it appears that having pear may make it easier to debug.
[root@hetzner2 wiki.opensourceecology.org]# yum install php-pear-Mail-Mime ... Installed: php-pear-Mail-Mime.noarch 0:1.10.2-1.el7 Complete! [root@hetzner2 wiki.opensourceecology.org]#
- I re-triggered the email to send. It came in, and the log still says it's using the mail() function
[root@hetzner2 wiki.opensourceecology.org]# tail -f wiki-error.log | grep -C3 -i mail [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'. IP: 127.0.0.1 Start request POST /wiki/Special:EmailUser HTTP HEADERS: X-REAL-IP: 104.51.202.137 X-FORWARDED-PROTO: https -- ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 ACCEPT-LANGUAGE: en-US,en;q=0.8 ACCEPT-ENCODING: gzip, deflate, br REFERER: https://wiki.opensourceecology.org/wiki/Special:EmailUser COOKIE: donot=cacheme; osewiki_db_wiki__session=tglqeps7foc00ah128mjrap1la56qdlj; osewiki_db_wiki_UserID=3709; osewiki_db_wiki_UserName=Maltfield DNT: 1 UPGRADE-INSECURE-REQUESTS: 1 -- "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [error] [W0VBqahT@xRKhqoY@veLxgAAAAU] /wiki/Special:EmailUser ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons #0 [internal function]: MWExceptionHandler::handleError(integer, string, string, integer, array) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(693): ini_set(string, string) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(129): Wikimedia\Rdbms\Database->installErrorHandler() -- #10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(777): wfGetDB(integer) #11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(396): User::idFromName(string, integer) #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(2230): User->load() #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(223): User->getId() #14 
/var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(205): SpecialEmailUser::validateTarget(User, User) #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(47): SpecialEmailUser::getTarget(string, User) #16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPage.php(488): SpecialEmailUser->getDescription() #17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(116): SpecialPage->setHeaders() #18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPage.php(522): SpecialEmailUser->execute(NULL) #19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPageFactory.php(578): SpecialPage->run(NULL) #20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(287): SpecialPageFactory::executePath(Title, RequestContext) #21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(851): MediaWiki->performRequest() -- User::getBlockedStatus: checking... User: loading options for user 3709 from override cache. User: loading options for user 3709 from override cache. UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org> Sending mail via internal mail() function MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed User: loading options for user 3709 from override cache. 
OutputPage::sendCacheControl: private caching; ** [error] [W0VBqahT@xRKhqoY@veLxgAAAAU] /wiki/Special:EmailUser ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons #0 [internal function]: MWExceptionHandler::handleError(integer, string, string, integer, array) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(693): ini_set(string, string) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(129): Wikimedia\Rdbms\Database->installErrorHandler()
- I'm tired of these errors; I commented out line 693 in includes/libs/rdbms/database/Database.php:installErrorHandler()
/** * Set a custom error handler for logging errors during database connection */ protected function installErrorHandler() { $this->mPHPError = false; #$this->htmlErrors = ini_set( 'html_errors', '0' ); set_error_handler( [ $this, 'connectionErrorLogger' ] ); }
- that cleans up the output at least
[root@hetzner2 wiki.opensourceecology.org]# tail -f wiki-error.log | grep -C3 -i mail [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'. IP: 127.0.0.1 Start request POST /wiki/Special:EmailUser HTTP HEADERS: X-REAL-IP: 104.51.202.137 X-FORWARDED-PROTO: https -- ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 ACCEPT-LANGUAGE: en-US,en;q=0.8 ACCEPT-ENCODING: gzip, deflate, br REFERER: https://wiki.opensourceecology.org/wiki/Special:EmailUser COOKIE: donot=cacheme; osewiki_db_wiki__session=tglqeps7foc00ah128mjrap1la56qdlj; osewiki_db_wiki_UserID=3709; osewiki_db_wiki_UserName=Maltfield DNT: 1 UPGRADE-INSECURE-REQUESTS: 1 -- User::getBlockedStatus: checking... User: loading options for user 3709 from override cache. User: loading options for user 3709 from override cache. UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org> Sending mail via internal mail() function MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed
- curiously, as a test per this article, I wrote a simple test.php script to mail() myself something; it failed https://www.mediawiki.org/wiki/Manual:$wgEnableEmail
[root@hetzner2 mail]# cat /var/www/html/wiki.opensourceecology.org/htdocs/test.php <?php # This is just a test to debug email issues; please delete this file # --Michael Altfield <michael@opensourceecology.org> 2018-07-10 # we set a cookie to prevent varnish from caching this page header( "Set-Cookie: donot=cacheme" ); mail( "michael@opensourceecology.org", "my subject", "my message body" ); ?> [root@hetzner2 mail]#
- while tailing the maillog, I see this when I trigger my test script
[root@hetzner2 mail]# tail -f /var/log/maillog Jul 10 23:41:16 hetzner2 postfix/pickup[11033]: DCFA7681EA4: uid=48 from=<apache> Jul 10 23:41:16 hetzner2 postfix/cleanup[23835]: DCFA7681EA4: message-id=<20180710234116.DCFA7681EA4@hetzner2.opensourceecology.org> Jul 10 23:41:16 hetzner2 postfix/qmgr[1631]: DCFA7681EA4: from=<apache@hetzner2.opensourceecology.org>, size=412, nrcpt=1 (queue active) Jul 10 23:41:17 hetzner2 postfix/smtp[23837]: DCFA7681EA4: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[108.177.119.27]:25, delay=0.27, delays=0.02/0/0.05/0.2, dsn=2.0.0, status=sent (250 2.0.0 OK 1531266077 s19-v6si3778539edc.383 - gsmtp) Jul 10 23:41:17 hetzner2 postfix/qmgr[1631]: DCFA7681EA4: removed
- but I see this when I load the mediawiki EmailUser page
Jul 10 23:42:06 hetzner2 postfix/pickup[11033]: 43A7F681EA4: uid=48 from=<contact@opensourceecology.org> Jul 10 23:42:06 hetzner2 postfix/cleanup[23835]: 43A7F681EA4: message-id=<osewiki_db-wiki_.5b45444e401807.54864375@wiki.opensourceecology.org> Jul 10 23:42:06 hetzner2 postfix/qmgr[1631]: 43A7F681EA4: from=<contact@opensourceecology.org>, size=983, nrcpt=1 (queue active) Jul 10 23:42:06 hetzner2 postfix/smtp[23837]: 43A7F681EA4: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[64.233.167.26]:25, delay=0.29, delays=0.01/0/0.07/0.22, dsn=2.0.0, status=sent (250 2.0.0 OK 1531266126 j140-v6si391690wmd.76 - gsmtp) Jul 10 23:42:06 hetzner2 postfix/qmgr[1631]: 43A7F681EA4: removed
- so further research & digging into the code suggests that the PEAR module is only used if we want to use an external SMTP server. We don't; we want to use our local smtp server. The default is to use mail(). Since the $wgSMTP var wasn't set on the old server, the old server should have also been using mail() https://www.mediawiki.org/wiki/Manual:$wgSMTP
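For reference, a hypothetical sketch (all values invented; we do NOT set this) of what enabling the PEAR SMTP path in LocalSettings.php would look like:

```php
// Hypothetical values for illustration only -- our LocalSettings.php leaves
// $wgSMTP at its default (false), so MediaWiki uses PHP's mail() and our
// local postfix instead of the PEAR Mail SMTP client.
$wgSMTP = [
    'host'     => 'tls://smtp.example.org', // external SMTP relay (hypothetical)
    'IDHost'   => 'opensourceecology.org',  // domain used when generating Message-IDs
    'port'     => 587,
    'auth'     => true,
    'username' => 'contact@opensourceecology.org',
    'password' => 'redacted'
];
```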
- finally, I decided to grep the maillog for one of the users = milesransaw@gmail.com. I got an error that appears to have come from Google regarding "IPv6 sending guidelines"
[root@hetzner2 htdocs]# grep -irC5 'milesransaw@gmail.com' /var/log
/var/log/maillog-20180710-Jul 9 22:21:08 hetzner2 postfix/scache[24613]: statistics: address lookup hits=0 miss=4 success=0%
/var/log/maillog-20180710-Jul 9 22:21:08 hetzner2 postfix/scache[24613]: statistics: max simultaneous domains=1 addresses=2 connection=2
/var/log/maillog-20180710-Jul 9 22:25:33 hetzner2 postfix/pickup[24510]: 5039E681DE9: uid=48 from=<contact@opensourceecology.org>
/var/log/maillog-20180710-Jul 9 22:25:33 hetzner2 postfix/cleanup[25828]: 5039E681DE9: message-id=<osewiki_db-wiki_.5b43e0dd4ba6c0.09234917@wiki.opensourceecology.org>
/var/log/maillog-20180710-Jul 9 22:25:33 hetzner2 postfix/qmgr[1631]: 5039E681DE9: from=<contact@opensourceecology.org>, size=1236, nrcpt=1 (queue active)
/var/log/maillog-20180710:Jul 9 22:25:33 hetzner2 postfix/smtp[25830]: 5039E681DE9: to=<milesransaw@gmail.com>, relay=gmail-smtp-in.l.google.com[2a00:1450:400c:c0c::1b]:25, delay=0.36, delays=0.02/0/0.05/0.3, dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[2a00:1450:400c:c0c::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . c19-v6si14396804wrc.112 - gsmtp (in reply to end of DATA command))
/var/log/maillog-20180710-Jul 9 22:25:33 hetzner2 postfix/cleanup[25828]: A808D681EA4: message-id=<20180709222533.A808D681EA4@hetzner2.opensourceecology.org>
/var/log/maillog-20180710-Jul 9 22:25:33 hetzner2 postfix/bounce[25832]: 5039E681DE9: sender non-delivery notification: A808D681EA4
/var/log/maillog-20180710-Jul 9 22:25:33 hetzner2 postfix/qmgr[1631]: A808D681EA4: from=<>, size=3974, nrcpt=1 (queue active)
/var/log/maillog-20180710-Jul 9 22:25:33 hetzner2 postfix/qmgr[1631]: 5039E681DE9: removed
/var/log/maillog-20180710-Jul 9 22:25:33 hetzner2 postfix/smtp[25830]: A808D681EA4: to=<contact@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c0c::1b]:25, delay=0.15, delays=0/0/0.06/0.08, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c0c::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . s9-v6si13928170wrm.364 - gsmtp (in reply to end of DATA
- I did another search for the other user = bains.hmn@gmail.com. Interestingly, I got no results at all this time
[root@hetzner2 htdocs]# grep -irC5 'bains.hmn' /var/log
[root@hetzner2 htdocs]#
- when I try to email this user using Special:EmailUser, I get an error = "This user has not specified a valid email address."
- digging into the DB, I see this user set their email to 'bains.hmn@gmail.com', which seems fine to me
MariaDB [osewiki_db]> select user_email from wiki_user where user_name = 'Hbains' limit 10;
+---------------------+
| user_email          |
+---------------------+
| bains.hmn@gmail.com |
+---------------------+
1 row in set (0.00 sec)

MariaDB [osewiki_db]>
- anyway, let me continue with the one that's not a dead-end. Unfortunately, the IPv6AuthError link just sends me to a generic google "Bulk Sending Guidelines" doc https://support.google.com/mail/?p=IPv6AuthError
- I already configured our spf records, but it wants some more shit. First I have to get the "Gmail settings administrator privileges" from Marcin.
- actually, the error log specifically mentions "550-5.7.1"; more on this specific error number can be found in this "SMTP Error Reference" https://support.google.com/a/answer/3726730?hl=en
- 550, "5.7.1", Email quota exceeded.
- 550, "5.7.1", Invalid credentials for relay.
- 550, "5.7.1", Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been blocked. Review our Bulk Senders Guidelines.
- 550, "5.7.1", Our system has detected that this message is likely unsolicited mail. To reduce the amount of spam sent to Gmail, this message has been blocked. For more information, review this article.
- 550, "5.7.1", The IP you're using to send mail is not authorized to send email directly to our servers. Please use the SMTP relay at your service provider instead. For more information, review this article.
- 550, "5.7.1", The user or domain that you are sending to (or from) has a policy that prohibited the mail that you sent. Please contact your domain administrator for further details. For more information, review this article.
- 550, "5.7.1", Unauthenticated email is not accepted from this domain.
- 550, "5.7.1", Daily SMTP relay limit exceeded for customer. For more information on SMTP relay sending limits please contact your administrator or review this article.
- actually, I don't think any of those are correct. This appears to be caused by us not having an "AAAA" dns record pointing to our ipv6 address, even though our server has 2x ipv6 addresses. In this case, it appears that our server contacted google from ipv6 address = "2a01:4f8:172:209e::2", but google didn't get that address back when it attempted to resolve 'opensourceecology.org' https://serverfault.com/questions/732187/sendmail-can-not-deliver-to-gmail-ipv6-sending-guidelines-regarding-ptr-record
[root@hetzner2 htdocs]# ip -6 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2a01:4f8:172:209e::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::921b:eff:fe94:7c4/64 scope link
       valid_lft forever preferred_lft forever
[root@hetzner2 htdocs]#
- I created an AAAA (ipv6 A) dns record (on cloudflare) pointing opensourceecology.org to 2a01:4f8:172:209e::2
- ^ that should take some time to propagate, and--since I can't reproduce the issue--we'll just wait to see if it occurs again & check the logs again
- a simpler solution might be to just change postfix to use ipv4 only, but I'll do that as a last resort https://www.linuxquestions.org/questions/linux-newbie-8/gmail-this-message-does-not-meet-ipv6-sending-guidelines-regarding-ptr-records-4175598760/
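for the record, that last-resort change would be roughly this (a sketch; postfix's inet_protocols defaults to "all"):

```ini
# /etc/postfix/main.cf -- force postfix to make outbound connections over
# ipv4 only, sidestepping google's ipv6 PTR/authentication checks entirely
inet_protocols = ipv4
```

followed by a `postfix reload`.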
- note that, interestingly, the ptr (reverse lookup) records of our ipv4 addresses don't point to opensourceecology.org; they point to hetzner's generic client hostnames
user@personal:~$ dig +short -x 138.201.84.223
static.223.84.201.138.clients.your-server.de.
user@personal:~$ dig +short -x 138.201.84.243
static.243.84.201.138.clients.your-server.de.
user@personal:~$
- I'll have to check this tomorrow after propagation takes place. Hopefully, once this reverse lookup returns a hostname like the ipv4 lookups above do, we've fixed the issue
user@personal:~$ dig +short -x "2a01:4f8:172:209e::2"
user@personal:~$
- in the meantime, I've manually reset the users' passwords & sent them emails manually
- ...
- Marcin had a 403 false-positive when attempting to embed an instagram feed. I whitelisted a rule & confirmed that I could submit the contents.
- id 973308, xss attack
- this fixed it & I emailed Marcin
- ...
- Marcin mentioned that a link to our wiki in a facebook feed shows a 403 on facebook. The link works, but the facebook "preview" in the comment feed shows a 403 Forbidden. Because facebook is dumb, I can't permalink directly to the comment (or maybe I could if I had a facebook account--not sure), but it's on this page https://www.facebook.com/groups/398759490316633/#
- I grepped through all the gzip'd modsecurity log files with the string 'Paysan' in it, and I found a bunch of results. I limited it further to include 'facebook', and found there was a useragent = facebookexternalhit/1.1. This was causing a 403 from rule id = 958291, protocol violation = "Range: field exists and begins with 0."
[root@hetzner2 httpd]# date
Wed Jul 11 04:09:26 UTC 2018
[root@hetzner2 httpd]# pwd
/var/log/httpd
[root@hetzner2 httpd]# for log in $(ls -1 | grep -i modsec | grep -i gz); do zcat $log | grep -iC50 'Paysan' ; done | grep -iC50 facebook
Server: Apache
Engine-Mode: "ENABLED"
--f6f9de1f-Z--

--1d0c4d75-A--
[08/Jul/2018:06:58:57 +0000] W0G2MRiIV543eZ9b0krEgAAAAAE 127.0.0.1 37540 127.0.0.1 8000
--1d0c4d75-B--
GET /entry/openid?Target=discussion%2F379%2Ffarming-agriculture-and-ranching-livestock-management-software%3Fpost%23Form_Body&url=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid HTTP/1.1
X-Real-IP: 203.133.174.77
X-Forwarded-Proto: https
X-Forwarded-Port: 443
Host: forum.opensourceecology.org
User-Agent: Mozilla/5.0 (compatible; Daum/4.1; +http://cs.daum.net/faq/15/4118.html?faqId=28966)
Accept-Language: ko-kr,ko;q=0.8,en-us;q=0.5,en;q=0.3
Accept: */*
Accept-Charset: utf-8,EUC-KR;q=0.7,*;q=0.5
X-Forwarded-For: 203.133.174.77, 127.0.0.1, 127.0.0.1
hash: #forum.opensourceecology.org
Accept-Encoding: gzip
X-Varnish: 13886122
--1d0c4d75-F--
HTTP/1.1 403 Forbidden
Content-Length: 214
Content-Type: text/html; charset=iso-8859-1
--1d0c4d75-E--

--1d0c4d75-H--
Message: Access denied with code 403 (phase 2). Pattern match "([\\~\\!\\@\\#\\$\\%\\^\\&\\*\\(\\)\\-\\+\\=\\{\\}\\[\\]\\|\\:\\;\"\\'\\\xc2\xb4\\\xe2\x80\x99\\\xe2\x80\x98\\`\\<\\>].*?){4,}" at ARGS:Target. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "159"] [id "981173"] [rev "2"] [msg "Restricted SQL Character Anomaly Detection Alert - Total # of special characters exceeded"] [data "Matched Data: - found within ARGS:Target: discussion/379/farming-agriculture-and-ranching-livestock-management-software?post#Form_Body"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"]
Action: Intercepted (phase 2)
Stopwatch: 1531033137000684 589 (- - -)
Stopwatch2: 1531033137000684 589; combined=362, p1=87, p2=247, p3=0, p4=0, p5=27, sr=20, sw=1, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED"
--1d0c4d75-Z--

--52e6a01c-A--
[08/Jul/2018:06:59:50 +0000] W0G2ZlraFr00R9M6JipfIQAAAAI 127.0.0.1 37638 127.0.0.1 8000
--52e6a01c-B--
GET /wiki/L%E2%80%99Atelier_Paysan HTTP/1.0
X-Real-IP: 66.220.146.185
X-Forwarded-Proto: https
X-Forwarded-Port: 443
Host: wiki.opensourceecology.org
Accept: */*
User-Agent: facebookexternalhit/1.1
Range: bytes=0-131071
X-Forwarded-For: 127.0.0.1
Accept-Encoding: gzip
X-Varnish: 14190516
--52e6a01c-F--
HTTP/1.1 403 Forbidden
Content-Length: 225
Connection: close
Content-Type: text/html; charset=iso-8859-1
--52e6a01c-E--

--52e6a01c-H--
Message: Access denied with code 403 (phase 2). String match "bytes=0-" at REQUEST_HEADERS:Range. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "428"] [id "958291"] [rev "2"] [msg "Range: field exists and begins with 0."] [data "bytes=0-131071"] [severity "WARNING"] [ver "OWASP_CRS/2.2.9"] [maturity "6"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/INVALID_HREQ"]
Action: Intercepted (phase 2)
Apache-Handler: php5-script
Stopwatch: 1531033190783654 371 (- - -)
Stopwatch2: 1531033190783654 371; combined=130, p1=87, p2=14, p3=0, p4=0, p5=29, sr=22, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED"
--52e6a01c-Z--

--282b2851-A--
[08/Jul/2018:06:59:51 +0000] W0G2ZxiIV543eZ9b0krEgQAAAAE 127.0.0.1 37642 127.0.0.1 8000
--282b2851-B--
GET /wiki/L%E2%80%99Atelier_Paysan HTTP/1.0
X-Real-IP: 31.13.122.23
X-Forwarded-Proto: https
X-Forwarded-Port: 443
Host: wiki.opensourceecology.org
Accept: */*
User-Agent: facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
Range: bytes=0-524287
X-Forwarded-For: 127.0.0.1
Accept-Encoding: gzip
X-Varnish: 13886168
--282b2851-F--
HTTP/1.1 403 Forbidden
Content-Length: 225
Connection: close
Content-Type: text/html; charset=iso-8859-1
--282b2851-E--

--282b2851-H--
Message: Access denied with code 403 (phase 2). String match "bytes=0-" at REQUEST_HEADERS:Range. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "428"] [id "958291"] [rev "2"] [msg "Range: field exists and begins with 0."] [data "bytes=0-524287"] [severity "WARNING"] [ver "OWASP_CRS/2.2.9"] [maturity "6"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/INVALID_HREQ"]
Action: Intercepted (phase 2)
Apache-Handler: php5-script
Stopwatch: 1531033191244028 353 (- - -)
Stopwatch2: 1531033191244028 353; combined=129, p1=85, p2=14, p3=0, p4=0, p5=30, sr=20, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED"
--282b2851-Z--

--41d26d46-A--
[08/Jul/2018:07:03:42 +0000] W0G3ThiIV543eZ9b0krEhAAAAAE 127.0.0.1 38196 127.0.0.1 8000
--41d26d46-B--
GET /entry/register?Target=discussion%2F541%2Fsolved-emailprocessor.php-sends-all-emails-to-039civimail.ignored039 HTTP/1.1
X-Real-IP: 96.73.213.217
X-Forwarded-Proto: https
X-Forwarded-Port: 443
Host: forum.opensourceecology.org
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.2057.400 QQBrowser/9.5.10158.400
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
DNT: 1
X-Forwarded-For: 96.73.213.217, 127.0.0.1, 127.0.0.1
Accept-Encoding: gzip
hash: #forum.opensourceecology.org
X-Varnish: 14190751
--41d26d46-F--
[root@hetzner2 httpd]#
- I whitelisted this rule in the vhost config file
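the whitelist amounts to something like this (a sketch; the exact placement within our vhost config may differ):

```apache
# wiki vhost config: disable OWASP CRS rule 958291
# ("Range: field exists and begins with 0."), which false-positives on
# facebookexternalhit's "Range: bytes=0-..." link-preview requests
<IfModule mod_security2.c>
	SecRuleRemoveById 958291
</IfModule>
```

followed by an apache config reload.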
=Fri Jul 06, 2018=
- yesterday I calculated that we should back up about ~34.87G of data from hetzner1 to glacier before shutting down the node and terminating its contract
- note that this size will likely be much smaller after compression.
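the compression win can be sanity-checked locally before the real run; a minimal sketch with throwaway data (the real backups use bzip2 via tar -cjvf; gzip is used here just to keep the sketch dependency-light):

```shell
# sketch: compare raw vs compressed size of a throwaway directory
dir=$(mktemp -d)
# repetitive data (like logs & SQL dumps) compresses very well
for i in $(seq 1 1000); do
  echo "Jul 10 23:41:16 hetzner2 postfix/qmgr[1631]: DCFA7681EA4: removed"
done > "$dir/sample.log"
raw=$(du -sb "$dir" | cut -f1)
compressed=$(tar -czf - -C "$dir" . | wc -c)
echo "raw=${raw}B compressed=${compressed}B"
rm -rf "$dir"
```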
- I confirmed that we have 128G of available space to '/' on hetzner2
[root@hetzner2 ~]# date
Fri Jul 6 17:59:12 UTC 2018
[root@hetzner2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        197G   60G  128G  32% /
devtmpfs         32G     0   32G   0% /dev
tmpfs            32G     0   32G   0% /dev/shm
tmpfs            32G  3.1G   29G  10% /run
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  289M  174M  63% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1005
[root@hetzner2 ~]#
- we also have 165G of available space on '/usr' on hetzner1
osemain@dedi978:~$ date
Fri Jul 6 19:59:31 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/dm-0              9.8G  363M  8.9G   4% /
udev                    10M     0   10M   0% /dev
tmpfs                  787M  788K  786M   1% /run
/dev/dm-1              322G  142G  165G  47% /usr
tmpfs                  2.0G     0  2.0G   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                95M   30M   66M  32% /boot
/dev/mapper/vg-tmp     4.8G  308M  4.3G   7% /tmp
/dev/mapper/vg-var      20G  2.3G   17G  13% /var
tmpfs                  2.0G     0  2.0G   0% /var/spool/exim/scan
/dev/mapper/vg-vartmp  5.8G  1.8G  3.8G  32% /var/tmp
osemain@dedi978:~$
- while it may make sense to do this upload to glacier on hetzner1, I've had hetzner1 terminate my screen sessions randomly in the past. I'd rather do it on hetzner2--where I actually have control over the server with root credentials. Therefore, I think I'll make the compressed tarballs on hetzner1 & scp them to hetzner2. On hetzner2 I'll encrypt the tarballs, create their (also encrypted) corresponding metadata files (listing all the files in the tarballs, for easy/cheaper querying later), and upload both.
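the metadata file for each tarball is just a listing of its contents, generated before encryption; a minimal sketch with a throwaway tarball (hypothetical paths; the real backups use bzip2 via tar -cjvf):

```shell
# sketch: build a tarball plus a manifest of its contents, so we can later
# answer "which glacier archive holds file X?" without thawing anything
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "hello" > "$workdir/data/file1.txt"
echo "world" > "$workdir/data/file2.txt"
tar -czf "$workdir/backup.tar.gz" -C "$workdir" data
# the manifest: one line per archived path
tar -tzf "$workdir/backup.tar.gz" > "$workdir/backup.tar.gz.manifest.txt"
manifest=$(cat "$workdir/backup.tar.gz.manifest.txt")
echo "$manifest"
rm -rf "$workdir"
```

both the tarball and its manifest would then be encrypted & uploaded.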
- I created a wiki article for this CHG, which will be the canonical URL listed in the metadata files for info on what this data is that I've uploaded to glacier https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation
- I discovered that the DBs on hetzner1 are, unavoidably, accessible from the public Internet (ugh).
- so I _could_ do the mysqldump from hetzner2, but it's better to do it locally (less data transfer & better security), and then compress it _before_ sending it to hetzner2
- began backing-up files on osemain
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=openswh
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=ose_fef
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=ose_website
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osesurv
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osewiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
- began backups on oseblog
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DB
dbName=oseblog
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
- began backups on osecivi
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=osecivi
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osedrupal
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
- began backup of oseforum
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DB
dbName=oseforum
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
- began backup of microft
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=microft_db2
dbUser=microft_2
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=microft_drupal1
dbUser=microft_d1u
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=microft_wiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
- after compression (but before encryption), here's the resulting sizes of the backups
- oseforum
oseforum@dedi978:~$ find noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
57M noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
oseforum@dedi978:~$
- osecivi 16M
osecivi@dedi978:~/noBackup$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
180K /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2
2.3M /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2
12M /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2
1.1M /usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2
osecivi@dedi978:~/noBackup$
- oseforum 957M
oseforum@dedi978:~$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
854M /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
oseforum@dedi978:~$
- created a safe dir on hetzner2 to store the files before encrypting & uploading to glacier
[root@hetzner2 tmp]# cd /var/tmp
[root@hetzner2 tmp]# mkdir deprecateHetzner1
[root@hetzner2 tmp]# chown root:root deprecateHetzner1/
[root@hetzner2 tmp]# chmod 0700 deprecateHetzner1/
[root@hetzner2 tmp]# ls -lah deprecateHetzner1/
total 8.0K
drwx------  2 root root 4.0K Jul 6 23:14 .
drwxrwxrwt. 52 root root 4.0K Jul 6 23:14 ..
[root@hetzner2 tmp]#
- ...
- while the backups were running on hetzner2, I began looking into migrating hetzner2's active daily backups to s3.
- I logged into the aws console for the first time in a couple months, and I saw that our first bill was $5.20 in May, $1.08 in June, and $1.08 in July. Not bad, but that's going to go up after we dump all this hetzner1 stuff in glacier & start using s3 for our dailies. In any case, it'll be far, far, far less than the amount we'll be saving by ending our contract for hetzner1!
- I created our first bucket in s3 named 'oseserverbackups'
- important: it was set to "do not grant public read access to this bucket" !
- looks like I already created an IAM user & creds with access to both glacier & s3. I added this to hetzner2:/root/backups/backup.settings
- I installed the aws cli for the root user on hetzner2, added the creds, and confirmed that I could access the new bucket
# create temporary directory
tmpdir=`mktemp -d`
pushd "$tmpdir"

/usr/bin/wget "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip"
/usr/bin/unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws

# cleanly exit
popd
/bin/rm -rf "$tmpdir"
exit 0
[root@hetzner2 tmp.vbm56CUp50]# aws --version
aws-cli/1.15.53 Python/2.7.5 Linux/3.10.0-693.2.2.el7.x86_64 botocore/1.10.52
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
[root@hetzner2 tmp.vbm56CUp50]# aws configure
AWS Access Key ID [None]: <obfuscated>
AWS Secret Access Key [None]: <obfuscated>
Default region name [None]: us-west-2
Default output format [None]:
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
2018-07-07 00:05:22 oseserverbackups
[root@hetzner2 tmp.vbm56CUp50]#
- successfully tested an upload to s3
[root@hetzner2 backups]# cat /var/tmp/test.txt
some file destined for s3
this is
[root@hetzner2 backups]# aws s3 cp /var/tmp/test.txt s3://oseserverbackups/test.txt
upload: ../../var/tmp/test.txt to s3://oseserverbackups/test.txt
[root@hetzner2 backups]#
- confirmed that I could see the file in the aws console wui
- clicked the link for the object, and confirmed that I got an AccessDenied error https://s3-us-west-2.amazonaws.com/oseserverbackups/test.txt
- next step: enable a lifecycle policy. Ideally, I want to be able to say that files uploaded on the first of the month (either by metadata of the upload timestamp or by regex match on object name) will automatically "freeze" into glacier after a few days, and all other files will just get deleted automatically after a few days.
- so it looks like we can limit by object name match or by tag. It's probably better if we just have our script add a 'monthly-backup' tag to the object when uploading on the first-of-the-month, then have our lifecycle policy built around that bit https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
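a sketch of what that tag-based lifecycle config might look like (the tag key/values and day counts below are made up, not decided yet):

```json
{
  "Rules": [
    {
      "ID": "freeze-monthly-backups",
      "Status": "Enabled",
      "Filter": { "Tag": { "Key": "backup-type", "Value": "monthly" } },
      "Transitions": [ { "Days": 7, "StorageClass": "GLACIER" } ]
    },
    {
      "ID": "expire-daily-backups",
      "Status": "Enabled",
      "Filter": { "Tag": { "Key": "backup-type", "Value": "daily" } },
      "Expiration": { "Days": 7 }
    }
  ]
}
```

this would be applied with `aws s3api put-bucket-lifecycle-configuration --bucket oseserverbackups --lifecycle-configuration file://lifecycle.json`, and the backup script would tag its first-of-the-month uploads with something like `--tagging "backup-type=monthly"` via `aws s3api put-object`.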
- ugh, TIL s3 objects under the default storage class = STANDARD_IA have a minimum lifetime of 30 days. If you delete an object before 30 days, you're still charged for 30 days https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html
- that means we'll have to store 30 copies of our daily backups at minimum, which are 15G as of now. That's 450G stored to s3 = 0.023 * 450 = $10.35/mo * 12 = $124.2/yr. That sucks.
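the arithmetic above, as a quick sanity check (assuming s3 standard at $0.023/GB-month):

```shell
# sanity-check the s3 cost estimate: 30 daily copies x 15G = 450G resident
awk -v p=0.023 -v g=450 \
    'BEGIN { m = p * g; printf "monthly=$%.2f yearly=$%.2f\n", m, m * 12 }'
# prints: monthly=$10.35 yearly=$124.20
```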
- per my previous research, we may want to look into using one of these providers instead:
- Backblaze B2 https://www.backblaze.com/b2/cloud-storage.html
- Google Nearline & Coldline https://cloud.google.com/storage/archival/
- Microsoft OneDrive https://onedrive.live.com/about/en-us/
- a quick calculation on the backblaze price calculator (biased, of course) with initial_upload=15G, monthly_upload=450G, monthly_delete=435G, monthly_download=3G gives a cost of $7.11/year. They say that would cost $30.15/yr on s3, $29.88/yr on google, and $26.10 on Microsoft. Well, at least they're wrong in a good way: it would cost more than that in s3. Hopefully they know their own pricing better. ~$7/year is great for backing-up 15G every day.
=Thu Jul 05, 2018=
- logged time for last week
- using my ose account, I uploaded the remaining misc photos from my visit to FeF to a new album https://photos.app.goo.gl/YZGTQdWnfFWcJc6p8
- I created a slideshow out of this & added it to the wiki here https://wiki.opensourceecology.org/wiki/Michael_Photo_Folder
- ...
- began revisiting hetzner1. We want to dump all the content onto glacier before we terminate our contract here.
- I just checked the billing section. Wow, it's 74.79 eur per month. What a rip-off! Hopefully we won't have to pay that much longer.
- because we don't have root, this is more tricky. First, we need to get a list of all the users & investigate what data each has. If the total amount of data is small enough, we can just tar it all up & ship it to glacier.
- it's not an exact test, but skimming through /etc/passwd suggests that there may be 11 users on hetzner1: osemain, osecivi, oseblog, oseforum, oseirc, oseholla, osesurv, sandbox, microft, zabbix, openswh
- a better test is probably checking which users' shells are /bin/bash
osemain@dedi978:~$ grep '/bin/bash' /etc/passwd
root:x:0:0:root:/root:/bin/bash
postgres:x:111:113:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash
osemain:x:1010:1010:opensourceecology.org:/usr/home/osemain:/bin/bash
osecivi:x:1014:1014:civicrm.opensourceecology.org:/usr/home/osecivi:/bin/bash
oseblog:x:1015:1015:blog.opensourceecology.org:/usr/home/oseblog:/bin/bash
oseforum:x:1016:1016:forum.opensourceecology.org:/usr/home/oseforum:/bin/bash
osesurv:x:1020:1020:survey.opensourceecology.org:/usr/home/osesurv:/bin/bash
microft:x:1022:1022:test.opensourceecology.org:/usr/home/microft:/bin/bash
osemain@dedi978:~$
- excluding postgres & root, it looks like 6x users (many of the others are addons, and I think they're under 'osemain') = osemain, osecivi, oseblog, oseforum, osesurv, and microft
osemain@dedi978:~$ ls -lah public_html/archive/addon-domains/
total 32K
drwxr-xr-x  8 osemain users   4.0K Jan 18 16:56 .
drwxr-xr-x 13 osemain osemain 4.0K Jun 20  2017 ..
drwxr-xr-x  2 osemain users   4.0K Jul 26  2011 addontest
drwx---r-x  2 osemain users   4.0K Jul 26  2011 holla
drwx---r-x  2 osemain users   4.0K Jul 26  2011 irc
drwxr-xr-x  2 osemain osemain 4.0K Jan 18 16:59 opensourcewarehouse.org
drwxr-xr-x  2 osemain osemain 4.0K Feb 23  2012 sandbox
drwxr-xr-x 13 osemain osemain 4.0K Dec 30  2017 survey
osemain@dedi978:~$
- I was able to ssh in as osemain, osecivi, oseblog, and oseforum (using my pubkey, so I must have set this up earlier when investigating what I needed to migrate). I was _not_ able to ssh in as 'osesurv' and 'microft'
- on the main page of the konsoleh wui after logging in, there's 5 domains listed: "(blog|civicrm|forum|test).opensourceecology.org" and 'opensourceecology.org'. The one that stands out here is 'test.opensourceecology.org'. Upon clicking on it & digging around, I found that the username for this domain is 'microft'.
- In this test = microft domain (in the konsoleh wui), I tried to click 'WebFTP' (which is how I would upload my ssh key), but I got an error "Could not connect to server dedi978.your-server.de:21 with user microft". Indeed, it looks like the account is "suspended"
- to confirm further, I clicked the "FTP" link for the forum account, and confirmed that I could ftp in (ugh) as the user & password supplied by the wui (double-ugh). I tried again using the user/pass from the test=microft domain, and I could not login
- ^ that said, it *does* list it as using 4.49G of disk space + 3 DBs
- the 3 DBs are mysql = microft_db2 (24.3M), microft_drupal1 (29.7M), and microft_wiki (19.4G). Holy shit, 19.4G DB!
- digging into the last db's phpmyadmin, I see a table named "wiki_objectcache" that's 4.2G, "wiki_searchindex" that's 2.7G, and "wiki_text" that's 7.4G. This certainly looks like a Mediawiki DB.
- from the wiki_user table, the last user_id = 1038401 = Traci Clutter, which was created on 20150702040307
- I found that all these accounts are still accessible from a subdomain of our dedi978.your-server.de dns:
- http://blog.opensourceecology.org.dedi978.your-server.de/
- this one gives a 500 internal server error
- http://civicrm.opensourceecology.org.dedi978.your-server.de/
- this one actually loads a drupal page with a login, though the only content is " Welcome to OSE CRM / No front page content has been created yet."
- http://forum.opensourceecology.org.dedi978.your-server.de/
- this one still loads, and appears to be fully functional (ugh)
- http://test.opensourceecology.org.dedi978.your-server.de/
- this gives a 403 forbidden with the comment "You don't have permission to access / on this server." "Server unable to read htaccess file, denying access to be safe"
- In digging through the test.opensourceecology.org domain's settings, I found "Services -> Settings -> Block / Unblock". It (unlike the others) was listed as "Status: Blocked." So I clicked the "Unblock it" button and got "The domain has been successfully unblocked.".
- now WebFTP worked
- this now loads too http://test.opensourceecology.org.dedi978.your-server.de/ ! It's pretty horribly broken, but it appears to be a "True Fans Drupal" "Microfunding Proposal" site. I wouldn't be surprised if it got "blocked" due to being a hacked outdated version of drupal.
- WebFTP didn't let me upload a .ssh dir (it appears to not work with hidden dirs = '.' prefix), but I was able to FTP in (ugh)
- I downloaded the existing .ssh/authorized_keys file, added my key to the end of it, and re-uploaded it
- I was able to successfully ssh-in!
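For reference, the edit itself is just an append to the downloaded file before re-uploading it; a minimal sketch with placeholder keys standing in for the real ones (which I won't paste here):

```shell
# simulate the authorized_keys downloaded from ~/.ssh/ over FTP (placeholder key)
printf 'ssh-rsa AAAAexisting... olduser@somewhere\n' > authorized_keys
# my public key (placeholder)
printf 'ssh-rsa BBBBmaltfield... maltfield@ose\n' > id_rsa.pub
cp authorized_keys authorized_keys.bak   # keep the original before editing
cat id_rsa.pub >> authorized_keys        # append; never overwrite existing keys
wc -l < authorized_keys
```

After re-uploading the merged file, `ssh microft@dedi978.your-server.de` should accept the new key.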
- ok, now that I have access to what I believe to be all the accounts, let's determine what they've got in files
- I found a section of the hetzner konsoleh wui that shows sizes per account (Under Statistics -> Account overview)
- opensourceecology.org 99.6G
- blog.opensourceecology.org 8.71G
- test.opensourceecology.org 4.49G
- forum.opensourceecology.org 1.15G
- civicrm.opensourceecology.org 170M
- ^ all sites display "0G" for "Traffic"
- osemain has 5.7G, not including the websites that we migrated--whose data has been moved to 'noBackup'
osemain@dedi978:~$ date
Fri Jul 6 01:20:41 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ du -sh * --exclude='noBackup'
983M    backups
1.3M    bin
4.0K    composer.json
36K     composer.lock
4.0K    cron
4.0K    emails.txt
9.8M    extensions
16K     freemind.sourceforge.net
4.0K    id-dsa-iphone.pub
4.0K    id_rsa-hetzner
4.0K    id_rsa-hetzner.pub
288K    installer
0       jboss
470M    jboss-4.2.3.GA
4.0K    jboss-command-line.txt
234M    jdk1.6.0_29
0       jdk-6
808K    mbkp
0       opensourceecology.org
4.0K    passwd.cdb
4.0K    PCRE-patch
0       public_html
4.0K    uc?id=0B1psBarfpPkzb0JQV1B6Z01teVk
28K     users
16K     var-run
2.9M    vendor
4.0K    videos
4.0K    wiki_olddocroot
1.1M    wrapper-linux-x86-64-3.5.13
2.6G    www_logs
osemain@dedi978:~$ du -sh --exclude='noBackup'
5.7G    .
osemain@dedi978:~$
- oseblog has 2.7G
oseblog@dedi978:~$ date
Fri Jul 6 02:39:11 CEST 2018
oseblog@dedi978:~$ pwd
/usr/home/oseblog
oseblog@dedi978:~$ whoami
oseblog
oseblog@dedi978:~$ du -sh *
8.0K    bin
0       blog.opensourceecology.org
12K     cron
788K    mbkp
349M    oftblog.dump
4.0K    passwd.cdb
0       public_html
208K    tmp
104K    users
1.2G    www_logs
oseblog@dedi978:~$ du -sh
2.7G    .
oseblog@dedi978:~$
- osecivi has 44M
osecivi@dedi978:~$ date
Fri Jul 6 02:40:19 CEST 2018
osecivi@dedi978:~$ pwd
/usr/home/osecivi
osecivi@dedi978:~$ whoami
osecivi
osecivi@dedi978:~$ du -sh *
4.0K    bin
0       civicrm.opensourceecology.org
4.0K    civimail-errors.txt
2.0M    CiviMail.ignored-2011
20K     civimail.out
20K     cron
2.5M    d7-civicrm.dump
828K    d7-drupal.dump
788K    mbkp
2.2M    oftcivi.dump
8.0M    oftdrupal.dump
4.0K    passwd.cdb
0       public_html
4.0K    pw.txt
28K     users
3.4M    www_logs
osecivi@dedi978:~$ du -sh
44M     .
osecivi@dedi978:~$
- oseforum has 1.1G
oseforum@dedi978:~$ date
Fri Jul 6 02:41:04 CEST 2018
oseforum@dedi978:~$ pwd
/usr/home/oseforum
oseforum@dedi978:~$ whoami
oseforum
oseforum@dedi978:~$ du -sh *
8.0K    bin
16K     cron
0       forum.opensourceecology.org
788K    mbkp
7.5M    oftforum.dump
4.0K    passwd.cdb
0       public_html
102M    tmp
14M     users
11M     vanilla-2.0.18.1
756M    www_logs
oseforum@dedi978:~$ du -sh
1.1G    .
oseforum@dedi978:~$
- microft has 1.8G
microft@dedi978:~$ date
Fri Jul 6 02:42:00 CEST 2018
microft@dedi978:~$ pwd
/usr/home/microft
microft@dedi978:~$ whoami
microft
microft@dedi978:~$ du -sh *
8.8M    db-backup
3.6M    drupal.sql
1.6M    drush
44M     drush-backups
1.7M    git_repos
376M    mbkp-wiki-db
18M     mediawiki-1.20.2.tar.gz
4.0K    passwd.cdb
0       public_html
28K     users
1.3G    www_logs
microft@dedi978:~$ du -sh
1.8G    .
microft@dedi978:~$
- those numbers above are files only; they don't include mailboxes or databases. I don't really care about mailboxes (they're probably unused), but I do want to backup databases.
- osemain has 5 databases:
- openswh 7.51M
- ose_fef 3.65M
- ose_website 32M
- osesurv 697K
- osewiki 2.48G
- there doesn't appear to be any DBs for the 'addon' domains under this domain (addontest, holla, irc, opensourcewarehouse, sandbox, survey)
- oseblog has 1 db
- oseblog 1.23G
- osecivi has 2 dbs
- osecivi 31.3M
- osedrupal 8.05M
- oseforum has 1 db
- oseforum 182M
- microft has 3 dbs
- microft_db2 24.3M
- microft_drupal1 33.4M
- microft_wiki 19.5G
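The next step will be dumping each of these before shutdown. A sketch of the per-DB dump command (db names are from the konsoleh wui; the dated-filename convention is my own assumption, and the mysqldump line is echoed rather than executed here since it needs the live server & credentials):

```shell
# Sketch: dump + compress one DB before decommissioning hetzner1.
# --single-transaction gives a consistent InnoDB snapshot without
# holding table locks for the (long) duration of the 19.5G dump.
db="microft_wiki"
outfile="${db}.$(date +%Y%m%d).sql.gz"   # dated filename (assumption)
echo "mysqldump --single-transaction -u microft -p ${db} | gzip > ${outfile}"
```

The same one-liner applies to the other 11 DBs with the account's user & db name swapped in.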
- so the total size of file data to backup is 5.7+2.7+0.04+1.1+1.8 = 11.34G
- and the total size of db data to backup (the 12 DBs above, converted to G) is 0.008+0.004+0.032+0.001+2.48+1.23+0.031+0.008+0.182+0.024+0.033+19.5 = ~23.53G
- therefore, the total size of backups (files + dbs) to push to glacier so we can feel safe permanently shutting down hetzner1 is ~34.87G
- note that this size will likely be much smaller after compression.
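Sanity-checking the arithmetic above with awk (per-account and per-DB figures in GB, taken from the sizes listed earlier):

```shell
# files: osemain + oseblog + osecivi + oseforum + microft home dirs (GB)
files=$(awk 'BEGIN{printf "%.2f", 5.7+2.7+0.04+1.1+1.8}')
# dbs: the 12 databases listed above, converted to GB
dbs=$(awk 'BEGIN{printf "%.2f", 0.008+0.004+0.032+0.001+2.48+1.23+0.031+0.008+0.182+0.024+0.033+19.5}')
# grand total to push to glacier (pre-compression)
total=$(awk "BEGIN{printf \"%.2f\", $files+$dbs}")
echo "files=${files}G dbs=${dbs}G total=${total}G"
```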