Maltfield Log/2018 Q3

From Open Source Ecology
Jump to: navigation, search

My work log from the year 2018 Quarter 3. I intentionally made this verbose to make future admins' work easier when troubleshooting. The more keywords, error messages, etc. that are listed in this log, the more helpful it will be for the future OSE Sysadmin.

See Also

  1. Maltfield_Log
  2. User:Maltfield
  3. Special:Contributions/Maltfield

Sat Sep 29, 2018

  1. Xheni got back to me stating that they could not decrypt my message :( I sent back the message in plaintext, this time to xheni@phplist.com
  2. Sam & Xheni commented on my bug to add links to an attribute name. I responded, asking where the escaping of the '<' is occurring, and if we could use the htmlentities() & html_entity_decode() functions https://mantis.phplist.org/view.php?id=19436
  1. ...
  1. I spent some time reviewing our munin graphs. Looks good.
  1. ...
  1. While it's still fresh on my mind, I created a wiki page on Discourse https://wiki.opensourceecology.org/wiki/Discourse
    1. Considering my security concerns for Discourse, I was shocked to discover that whonix uses it as well. Looks like they switched from the SMF forum software. The switch was first discussed in 2015 here https://forums.whonix.org/t/change-whonix-forum-software-to-discourse/1181/2
    2. I found some other users of Discourse https://www.discourse.org/customers
      1. ubuntu https://discourse.ubuntu.com/
    3. I posted to the 3-year-old thread on the whonix forums, asking for their perspective on the security of self-hosting of Discourse https://forums.whonix.org/t/change-whonix-forum-software-to-discourse/1181/12
    4. I sent an email to Alex (cc'd Marcin) with a link to the article, asking him to add anything I may have missed.
  1. ...
  1. I added Extension:CookieWarning to the list of proposed extensions to add to mediawiki https://wiki.opensourceecology.org/wiki/Mediawiki#Proposed
  1. I added "Cookie Notice for GDPR" to a new list of proposed wordpress plugins https://wiki.opensourceecology.org/wiki/Wordpress#Wordpress_Plugins
  1. ...
  1. I logged into backblaze to see the status there
    1. It _should_ have a subset of our backups (just /etc, as we're still on the free version), demonstrating the retention policy = keeping a few days' worth of recent backups + the first of every month.
    2. To my great dismay, there were no files in our bucket!
    3. The logs reveal that the sudo call (used to de-escalate privileges before running the backblaze upload binary `b2`) failed because no tty was attached
================================================================================
Beginning Backup Run on 20180928_072001
...
================================================================================
Beginning Backup Run on 20180928_091810
/bin/tar: Removing leading `/' from member names

real  0m0.926s
user  0m0.915s
sys   0m0.066s
/bin/tar: Removing leading `/' from member names
/root/backups/sync/daily_hetzner2_20180928_091810/etc/
/root/backups/sync/daily_hetzner2_20180928_091810/etc/etc.20180928_091810.tar.gz

real  0m0.010s
user  0m0.001s
sys   0m0.009s

real  0m0.496s
user  0m0.483s
sys   0m0.012s
sudo: sorry, you must have a tty to run sudo

real  0m0.006s
user  0m0.000s
sys   0m0.003s
================================================================================
    1. it looks like there's no benefit to configuring sudo with requiretty. In fact, rhel claimed to have removed this in 2017 https://bugzilla.redhat.com/show_bug.cgi?id=1020147#c9
    2. I commented-out the "Defaults requiretty" line from /etc/sudoers
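For reference, the change and a crude tty-less test looked something like this ('b2user' here is just a placeholder for whatever unprivileged user our backup script sudo's to; adjust to match):
# edit /etc/sudoers with visudo and comment out this line:
#    Defaults    requiretty

# crude test: run sudo without a controlling tty (mimics cron); before the fix this logged
# "sudo: sorry, you must have a tty to run sudo"
setsid sudo -n -u b2user whoami < /dev/null &> /tmp/sudo-tty-test.log
cat /tmp/sudo-tty-test.log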
    3. I manually ran the backup2.sh script for testing the upload of our encrypted tarball subset backup of /etc/ to backblaze. It completed successfully.
    4. I refreshed the backblaze wui, and I confirmed that the 11.3 MB file 'daily_hetzner2_20180929_230326.tar.gpg' was present.
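For future checks, the bucket contents can also be listed from the cli instead of the wui; something like this (the bucket name and the 'b2user' user are guesses from memory; use whatever invocation the backup script itself uses):
# list what's currently in our b2 bucket, with sizes & upload timestamps
sudo -u b2user b2 ls --long ose-server-backups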
    5. so this is good news, in that I confirmed that old backup files apparently do get deleted by our retention policy
    6. but this is bad news, in that I'll have to wait another couple of months to confirm that the retention policy actually works (i.e. that the first-of-the-month backups are kept).
    7. as a reminder, the whole reason for the move to backblaze is that we're totally in violation of dreamhost's "unlimited" policy by storing our backups there. Earlier this year, they nicely threatened to delete all of our account's data unless we did so ourselves; then began the scramble to move everything to glacier. At the time, we were storing >500G on our account.
    8. a quick check shows that we're currently using 172G (and climbing). We have 13G of backups from hetzner1 + 156G of backups from hetzner2
hancock% date
Sat Sep 29 16:10:06 PDT 2018
hancock% pwd
/home/marcin_ose
hancock% du -sh 
172G	.
hancock% du -sh hetzner1
13G	hetzner1
hancock% du -sh hetzner2 
156G	hetzner2
hancock% du -sh hetzner1/*
12G	hetzner1/20180501-052002
259M	hetzner1/20180502-052001
464M	hetzner1/20180602-052001
464M	hetzner1/20180702-052001
hancock% du -sh hetzner2/*    
15G	hetzner2/20180501-072001
15G	hetzner2/20180601_072001
15G	hetzner2/20180701_072001
16G	hetzner2/20180801_072001
17G	hetzner2/20180901_072001
17G	hetzner2/20180925_072001
17G	hetzner2/20180926_072001
17G	hetzner2/20180927_072001
17G	hetzner2/20180928_072001
17G	hetzner2/20180929_072001
hancock% 
    1. so at least we _do_ still have current backups, but that could be taken away from us. Backups are an extremely high priority. Hopefully in a couple of weeks we can finish the switch to backblaze.
    2. I also checked the billing section to see if Marcin added our info. Looks like he didn't.
    3. I sent an email to Marcin asking when he would be able to add our billing info to the backblaze account.
  1. ...
  1. I began to look into the possibility of using the phplist REST api to create a "subscribe to our newsletter" form on distinct websites, rather than sending users from, for example, www.opensourceecology.org to phplist.opensourceecology.org https://resources.phplist.com/plugin/restapi
    1. I am a bit concerned that the security review of the rest api applies only to the phplist4 version, and not the plugin linked above for the stable version of phplist = phplist 3 https://www.phplist.org/phplist-rest-api-security/

Fri Sep 28, 2018

  1. Catching up on some emails
    1. Catarina had an issue installing a plugin for the new microfactory site
      1. We intentionally prevent wordpress from modifying any files that can execute as code (i.e. dirs where php files live, which excludes only the 'wp-content/uploads' dir) by denying "write" permission to the apache server's user--thereby rendering a large number of attacks futile (see the sketch after this list)
      2. I installed the "duplicate page" plugin for the microfactory site for Catarina
      3. I also included basic instructions on how to do it herself using filezilla or ssh as she has all necessary permissions to do so, including links to relevant documentation pages
        1. https://wiki.opensourceecology.org/wiki/Wordpress#Why.3F
        2. https://wiki.opensourceecology.org/wiki/Wordpress#Step_2:_Make_Vhost-specific_backups
        3. https://codex.wordpress.org/Managing_Plugins#Manual_Plugin_Installation
        4. https://wiki.opensourceecology.org/wiki/Wordpress#Proper_File.2FDirectory_Ownership_.26_Permissions
      4. Moreover, I offered to provide a training session on how to install a wp plugin
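For reference, that permission scheme boils down to roughly the following sketch (the vhost path, docroot name, and the unprivileged owner/group names here are assumptions; the authoritative version is on the wiki page linked above):
vhostDir="/var/www/html/microfactory.opensourceecology.org"

# wordpress files are owned by a non-apache user; the apache group gets read-only (+ execute on dirs)
chown -R not-apache:apache "${vhostDir}"
find "${vhostDir}" -type d -exec chmod 0050 {} \;
find "${vhostDir}" -type f -exec chmod 0040 {} \;

# the only dir apache (php) may write to is wp-content/uploads, which never executes php
chown -R apache:apache "${vhostDir}/public_html/wp-content/uploads"
find "${vhostDir}/public_html/wp-content/uploads" -type d -exec chmod 0770 {} \;
find "${vhostDir}/public_html/wp-content/uploads" -type f -exec chmod 0660 {} \;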
    2. Marcin asked if we should remove our billing info from hetzner just to prevent them from charging us for the additional 53 days after we told them to cancel hetzner1. I reiterated that we probably want to just pay it & let it be, as it would cost more to migrate our production site (hetzner2) off hetzner entirely--especially in the long run. Hetzner, for all their faults, is still our best option imo
    3. Catarina had an issue accessing a URL on the wiki, getting "This site can't be reached" in Chrome 67 and "Secure Connection Failed" in Firefox
      1. The issue was when trying to access http://opensourceecology.org/wiki/File:OSE_identity_guidelines_-_logo_color_applications_v1-1.svg#
      2. The above link *should* redirect to https://wiki.opensourceecology.org/wiki/File:OSE_identity_guidelines_-_logo_color_applications_v1-1.svg
      3. I confirmed that the redirect worked in my firefox browser and in curl
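The curl check was roughly this (expecting a 301/302 with a Location header pointing at the wiki subdomain):
curl -sI 'http://opensourceecology.org/wiki/File:OSE_identity_guidelines_-_logo_color_applications_v1-1.svg' | grep -iE '^(HTTP|Location)'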
      4. I responded that I couldn't reproduce it, but I went ahead and fixed the wiki page in question, converting external links to files on the wiki into relative links using the proper file syntax https://wiki.opensourceecology.org/index.php?title=Jean-Baptiste_Log_-_2014&type=revision&diff=178519&oldid=149888
    4. Marcin asked about Discourse as a possible forum platform
      1. Interestingly, I had just looked into Discourse last week, as it's what phplist uses
      2. I told Marcin my $0.02 about how I really, really liked Discourse from a user perspective, but the big negatives are:
        1. They run on Ruby, and can only be installed via docker containers--they openly admit to how hard it would be to install otherwise
        2. They intentionally don't support older devices. For example, they only support IE 11+, which means that some fabricator with an old windows xp desktop in their machine shop wouldn't be able to access the discussions. I generally don't like that, and we should seriously consider if that's how we want to align.
    5. Marcin encountered a 403 when attempting to embed an eventzilla html snippet
      1. The URL & content are:

https://wiki.opensourceecology.org/index.php?title=Atchison_Public_Library_Flyer&action=submit


<html><div id="eventzilla-iframe"></div><script
type='text/javascript'>window.onload = function() {var iframe =
document.createElement('iframe');iframe.id="ifeventzilla";iframe.style.width
= "100%";iframe.style.height =
"100%";iframe.frameBorder="0";iframe.src =
"https://www.eventzilla.net/web/event_embedd.aspx?eventid=2138715782";var
evntzilladiv=document.getElementById('eventzilla-iframe');evntzilladiv.appendChild(iframe);};</script></html>
      1. I whitelisted 958413, XSS
    1. Marcin got back to me about the privacy policy. He said he's in favor of a single/monolithic PP for all our sites. Therefore, I'll begin to add newsletter-specific stuff to our wiki PP for phplist. Then I'll just link to our wiki PP from phplist.
      1. unfortunately, it looks like I _can't_ put a link in the checkbox where the user accepts our PP/ToS. I asked about this last week, but still got no response https://discuss.phplist.org/t/link-to-tos-in-attribute/4428
        1. I created a bug report on phplist's mantis. I marked it as urgent since gdpr _is_ urgent--and I can't imagine it would fly to ask people to consent to a document that we don't even provide to them! https://mantis.phplist.org/view.php?id=19436
  1. ...
  1. Sam Tuke got back to me about my request to edit the phplist manual
    1. He asked for clarification, noting that I could register for the wiki here: https://resources.phplist.com/start?do=register
    2. And that the manual is a static site in a private Git repo.
    3. I asked him for access to the private git repo, and I registered for the wiki.
Yes, I would greatly appreciate access to the private Git repo so I can contribute to the manual.

Specifically, I’d like to add a section to the “Troubleshooting Techniques” page describing the error_reporting() function.

 * https://discuss.phplist.org/t/troubleshooting-techniques-manual-chapter-feedback-and-discussion/234

Thank you!
    1. so this is complicated. It appears that there are 3x places for documentation of the phplist project:
      1. developer documentation https://www.phplist.org/development/
      2. wiki documentation https://resources.phplist.com/documentation/start
        1. in the wiki, I found a great list of all the possible config file variables https://resources.phplist.com/system/config/all
        2. also, it looks like there's a method to attach a logo file, rather than linking it (so that mail clients blocking hyperlinked images still see the logo) https://resources.phplist.com/system/logo
        3. found a short doc describing how to set up the cli (may be useful to process the queue from a cron job; see the sketch after this list) https://resources.phplist.com/system/commandline
        4. interestingly, I found a guide to block access to the phplist admin pages using an apache rewrite rule. There's no info on how to bypass it for the _real_ admin, though https://resources.phplist.com/admin_pages_access_control
      3. and the manual https://www.phplist.org/manual/
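Regarding the commandline doc above, processing the queue from cron would look something like the sketch below (the paths are ours; the '-p processqueue -c <config>' flags are per that doc, which also mentions that the executing system user has to be whitelisted in the config--so double-check before relying on this):
# sketch: process the phplist send queue every 10 minutes
# (the cron user must be able to read our config.php)
cat << 'EOF' > /etc/cron.d/phplist
*/10 * * * * apache /usr/bin/php /var/www/html/phplist.opensourceecology.org/public_html/lists/admin/index.php -p processqueue -c /var/www/html/phplist.opensourceecology.org/config.php >/dev/null 2>&1
EOF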
  1. ...
  1. I began to sketch a draft guide to hardening a phplist install
    1. I also sent an email to Xheni (a sec-focused dev at phplist) asking if she has any input on what the guide should include, specifically any of the numerous config.php options https://resources.phplist.com/system/config/all

Tue Sep 18, 2018

  1. Sam Tuke asked me to sign their Contributor License Agreement for my contribution to the phplist project https://github.com/phpList/phplist3/pull/403#issuecomment-422599653
    1. I reviewed the CLA. It says that I waive my copyrights so that the project can copyleft them under the AGPL. Sounds good to me. I "signed" it via the link, which required a read-only oauth link to my github.com account https://phplist.com/cla
  2. I checked again on the statistics of my test campaign. It now shows 2x unique opens, from michael@opensourceecology.org & phplist@opensourceecology.org. Indeed, I opened it from both of those. Interestingly, it also shows a timestamp for each time it was opened. I have like 20 opens from my personal account, and phplist opened it once.
  1. Our phplist install's attributes are empty. There are some silly default attributes available via the "predefined defaults" button, which I clicked. The one mentioned on the GDPR page of the manual is probably the one labeled "Terms of Service". I enabled that attribute. https://www.phplist.org/manual/ch048_gdpr.xhtml
  2. that added a required checkbox stating "I agree to the terms of service of phplist.opensourceecology.org" to the subscriber page https://phplist.opensourceecology.org/lists/?p=subscribe
  3. found a meta ticket in phplist's mantis for gdpr changes https://mantis.phplist.org/view.php?id=19032
  4. I tested an attempt to subscribe without checking the box. In my browser, I got what looked like a JS alert() stating "The following field is required: I agree to the terms of service of phplist.opensourceecology.org"
  5. I also tested it in links (text-only cli browser) to ensure it didn't require JS. Indeed, submitting failed. The page re-loaded with the same error message at the top of the form.
  6. when I checked the box, it went through.
  7. I went to check the subscribers list, and I found the attribute titled "I agree to the terms of service of phplist.opensourceecology.org". Unfortunately, the value for this attribute was just a (checked) checkbox. According to GDPR, it should be a timestamp.
  8. ugh, I tried to make the text actually link to our Privacy Policy, but apparently attributes can't include html? It just strips it out
    1. I started a thread on discuss.phplist.org about this issue and the suggested workaround https://discuss.phplist.org/t/link-to-tos-in-attribute/4428/2
    2. somewhat unrelated, but the forum/mailing list/longform chat solution that phplist uses at discuss.phplist.org runs on discourse. It's pretty nice, except that it's a ruby app. Anyway, as we're currently forum-less, I added a link to it as an alternative to our Vanilla Forums install https://wiki.opensourceecology.org/wiki/OSE_Forum
      1. uy, it only installs via docker too. And it's not very backwards-compatible. It probably requires JS too. Hard sell. https://github.com/discourse/discourse/blob/master/docs/INSTALL.md
      2. bad, bad, bad install instructions!!!! https://github.com/discourse/discourse/blob/master/docs/INSTALL-cloud.md
wget -qO- https://get.docker.com/ | sh
  1. I found a couple more guides on how to integrate the subscriber page into your site:
    1. https://www.phplist.com/knowledgebase/website-integration/
    2. https://wharfedalefestival.co.uk/dr/index.php/phplist-menu/124-phplist-how-to-include-a-custom-subscribe-form
    3. so it looks like we have 2x options. We could extract some header & footer html content from our sites, import it into phplist, and just redirect people there to subscribe. Or we could just drop subscribe forms (pointing at the phplist subscription page) onto our sites. I see faults & maintenance issues with both. Hmm.

Mon Sep 17, 2018

  1. our statuscake monthly report came in. We've had 100% uptime for 2 months in a row. I don't think we've ever hit that before. Most downtimes were caused by hetzner, and they've been less of an issue since we migrated to hetzner2
  1. Sam Tuke accepted my changes for better error handling of the random_compat error into the master branch of phplist :) https://github.com/phpList/phplist3/commit/779b90cd0c9fd4d6d12531eeec7b1b1d45e36ee1
  1. I moved our phplist config from port 4443 to port 443 by changing the nginx & varnish configs.
  2. I updated the phplist campaign template, removing the ":4443" from the images
  3. I sent the campaign test again, and the images showed up fine in gmail this time!
  4. now I tested the old email, and the images load there now as well. I guess the image proxy just ignores ports after all. Anyway, I guess we'll just have to use 443 here. I _really_ want 2FA, though.
  5. the default phplist footer is basically illegible due to the font color against the background color.
    1. I checked that our "unsubscribe" link and the footer's "opt-out completely" link are identical
    2. I checked that our "Update preferences" and "Edit your subscription" links match the footer's "preferences page" link
    3. I checked that our "Forward" link and the footer's "forward page" links match. They didn't!
      1. the footer uses [FORWARDURL]
--
	<div class="footer" style="text-align:left; font-size: 75%;">
	  <p>This message was sent to [EMAIL] by [FROMEMAIL]</p>
	  <p>To forward this message, please do not use the forward button of your email application, because this message was made specifically for you only. Instead use the <a href="[FORWARDURL]">forward page</a> in our newsletter system.<br/>
	  To change your details and to choose which lists to be subscribed to, visit your personal <a href="[PREFERENCESURL]">preferences page</a><br/>
	  Or you can <a href="[UNSUBSCRIBEURL]">opt-out completely</a> from all future mailings.</p>
	</div>
      1. I confirmed that my template also uses "[FORWARDURL]". Hmm.
    1. ah, interesting, there's a link on the image and a link on the text. I have to update both to be "[FORWARDURL]".
    2. also interesting: the proper link sends me to a 404 on our site, anyway https://phplist.opensourceecology.org/lists/?p=forward&mid=4&uid=43d035a41d2d893d21255158de6f9831
    3. further digging found that the forward link will 404 when sending a test email rather than actually sending out the campaign https://discuss.phplist.org/t/forward-and-profile-links-stopped-working/1323
  1. I decided to leave the footer as a nice plaintext paragraph with all the essential info for users without html.
    1. I modified it a bit; first I changed the color to be white
    2. to make changes to the footer, I had to whitelist modsec false-positives:
      1. 960915, protocol violation
      2. 200003, Multipart parser detected a possible unmatched boundary.


  1. I went through the Config -> Settings configs (in the DB), and made some changes as needed
    1. changed "Person in charge of this system (one email address)" from phplist@opensourceecology.org to "privacy@opensourceecology.org"
    2. it looks like the entry for 4443 isn't there anymore since I reinitialized the db last week
  1. I went to add marcin & Catarina as subscribers to our test list, but I got a 403 after clicking "Subscribers" -> "Add a new subscriber"
    1. whitelisted 981172, sqli
  2. I noticed that our subscriber list has no attributes. For example, we don't have a field to store the first name, last name, "how did you find out about us?", interests, etc. We didn't have name info before, so I'll leave it at just the email for now. We should decide later whether we want to collect this or not.
  3. I manually changed the "Is this subscriber confirmed (1/0)" from 0 to 1 for Marcin & Catarina.
  1. I sent a test campaign to the 'test' list, which is myself, Catarina, and Marcin.
    1. I'm not getting forwarded the emails from phplist@opensourceecology.org
      1. I confirmed that Marcin & I are both in the drop-down menu, but "disable forwarding" is selected in the gmail for phplist@opensourceecology.org
      2. I added the filters to forward mail for size > 0 to both myself & Marcin (2x rules)
    2. I pulled up the statistics. It still shows 0 opens, even though I opened it. hmm. I guess that's being blocked by the gmail proxy?
    3. I verified that the forward link works now
  1. interestingly, I discovered that phplist.com straight-up 403 Forbids requests to _their_ admin section https://www.phplist.com/lists/admin/
  1. I'm getting closer and closer to the need to write a GDPR-compliant Privacy Policy for phplist
    1. there's still some technical changes I need to make to phplist, but I may as well keep the ball rolling on this
    2. I responded to an old email Marcin sent me about my first attempt to prepare a Privacy Policy = the one for our wiki https://wiki.opensourceecology.org/wiki/Open_Source_Ecology:Privacy_policy
    3. Marcin had asked me why we mention storing anonymous editor's IP addresses when we don't permit anonymous edits. I told him I left it in intentionally as it's the default functionality of mediawiki, and I wanted to cover all bases. Indeed, when we migrated the site earlier this year, we discovered after some time that anonymous edits were enabled by accident. Better to have it in the policy IMO.
    4. Marcin asked if we could add some section stating that OSE would not automatically send our users' info to a 3rd party State upon subpoena without first investigating the legitimacy of the request and resisting if at all possible. WMF's current policy had a great wording with this intent, so I added a section on it here https://wiki.opensourceecology.org/wiki/Open_Source_Ecology:Privacy_policy#For_Legal_Reasons
    5. Moreover, I asked Marcin if we intended to have (a) a single monolithic PP for all our sites or if we wanted to maintain distinct, site-specific PPs. I voted for a single PP. If he says a single PP, then I guess the next step is to try to hack at the above wiki PP and make it apply to phplist newsletters. It already has the GDPR user information request/rectify/removal/etc bits, so we'd just have to add a section about "by agreeing to this privacy policy, you are consenting for us to send you information that we believe you have a legitimate interest in receiving" etc
  1. ...
  1. I'm still pretty concerned about the overall security of phplist. Simply gaining access to Marcin's email (or that of anyone who's an admin on phplist) would permit an attacker to reset the phplist password, log in, and steal our DB containing PII on our subscribers. The fix is to use 2FA, but phplist doesn't support it yet
  2. I logged into the g suite admin to check the "password strength" bar and length of Marcin's password. His "password strength" bar is at its max, and the length is reasonably long. That's good, at least.
  1. ..
  1. I was clicking around the phplist interface to see what/how we can make it "fit" in with our OSE sites. Sadly, I discovered some intentional information leakage of the phplist version in connect.php https://github.com/phpList/phplist3/blob/e1c88d6dd2834c878a392f091f9444ee99f1cc65/public_html/lists/admin/connect.php#L283-L288
  2. I created an issue on the phplist mantis about this https://mantis.phplist.org/view.php?id=19419
    1. hopefully they respond positively, and I'll just submit a PR with the version removed from the bottom.

Sat Sep 15, 2018

  1. samtuke (ceo at phplist) asked me to create a pull request for better error handling to catch the random_compat issue I documented on my blog https://discuss.phplist.org/t/common-installation-errors-manual-chapter-feedback-and-discussion/217/3
  2. I did so https://github.com/phpList/phplist3/pull/403
  3. I sent a distinct message to samtuke asking what would be required to gain write permissions to their documentation so I could contribute. For example, adding a hardening guide.
  4. I added my email address to the 'test' subscriber list
  5. I went to set up the campaign template based on my superficial html hacks of the old template from earlier, via "Campaigns" > "Manage campaign templates"
  6. I simply edited the existing html file, adding the [CONTENT] place-holder so phplist knows where to put the body of the message that the editor of the campaign types
  7. I sent a test campaign, but I had to fix a few modsecurity false-positives:
    1. 950911, general attack
    2. 981231, sqli
    3. 981248, sqli
    4. 981245, sqli
    5. 973338, xss
    6. 973304, xss
    7. 973306, xss
    8. 973333, xss
    9. 973344, xss
  8. it went out, but the template was absent! wtf?
  9. I went back to the "Manage campaign templates" page and made the one I uploaded earlier the "CAMPAIGN DEFAULT".
  10. I went to send another campaign. This time on the second step (tab 2 = Format) I noticed a "Use template" drop-down menu. Now the "ose_201809" template is selected by default.
  11. Finally, this one came in. It looks like shit in thunderbird w/ plaintext of course, but I can at least see the outline of the images by their alt texts.
  12. I opened it in google. This looks better-ish. The background colors are all there, but the images are all broken.
  13. looks like the ose logo points to 'http://./osemail.20180910b_files/logo-splash.145356.145846.png'; that's not going to work.
  14. so now we have the choice of attaching these images to the email (expensive & "good") vs linking to them on our website. Linking them means tracking & analytics.
  15. I did some searches on the ethics of tracking pixels. These are ubiquitously used by spammers, white & black hat hackers, and hopefully-good-intentioned email newsletter authors to collect data on their subscribers. I'm torn.
    1. https://www.cyberscoop.com/pixel-tracking-hacking-check-point/
    2. google has a guide to tracking users with an image over email https://developers.google.com/analytics/devguides/collection/protocol/v1/email
  16. speaking of tracking, I need to fucking remove all these facebook & twitter beacons from the template
  17. to properly upload images, I need to figure out the ideal permissions
    1. according to config_extended.php, UPLOADIMAGES_DIR is 'uploadimages'
[root@hetzner2 phplist.opensourceecology.org]# grep -ir 'UPLOADIMAGES_DIR' *
public_html/lists/config/config_extended.php:define('UPLOADIMAGES_DIR', 'uploadimages');
public_html/lists/config/config_extended.php://define("UPLOADIMAGES_DIR","images/newsletter/uploaded");
public_html/lists/admin/fckphplist.php:    $imgdir = $_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/';
public_html/lists/admin/fckphplist.php:    if (defined('UPLOADIMAGES_DIR')) {
public_html/lists/admin/fckphplist.php:        $imgdir = $_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/';
public_html/lists/admin/fckphplist.php:                +'<?php echo $GLOBALS['pageroot'].'/'.UPLOADIMAGES_DIR.'/' ?>'
public_html/lists/admin/init.php:if (!defined('UPLOADIMAGES_DIR')) {
public_html/lists/admin/init.php:    define('UPLOADIMAGES_DIR', 'images');
public_html/lists/admin/plugins/fckphplist/config.php:  if (defined('UPLOADIMAGES_DIR')) {
public_html/lists/admin/plugins/fckphplist/config.php:    $imgdir = $_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/';
public_html/lists/admin/plugins/fckphplist/config.php:FCKConfig.ImagePath = document.location.protocol + '//' + document.location.host +'<?php echo $GLOBALS["pageroot"].'/'.UPLOADIMAGES_DIR.'/'?>'
public_html/lists/admin/plugins/fckphplist/fckeditor/editor/filemanager/connectors/phplist/config.php:if (!defined('FCKIMAGES_DIR') && !defined('UPLOADIMAGES_DIR')) {
public_html/lists/admin/plugins/fckphplist/fckeditor/editor/filemanager/connectors/phplist/config.php:} elseif (defined('UPLOADIMAGES_DIR')) {
public_html/lists/admin/plugins/fckphplist/fckeditor/editor/filemanager/connectors/phplist/config.php:  $imgdir = $_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/';
public_html/lists/admin/plugins/fckphplist/fckeditor/editor/filemanager/connectors/phplist/config.php:  $Config['UserFilesPath'] = '/'.UPLOADIMAGES_DIR.'/' ;
public_html/lists/admin/plugins/CKEditorPlugin.php:                'uploadURL' => sprintf('%s://%s/%s', $public_scheme, $website, ltrim(UPLOADIMAGES_DIR, '/')),
public_html/lists/admin/plugins/CKEditorPlugin.php:                $kcUploadDir = rtrim($_SERVER['DOCUMENT_ROOT'], '/') . '/' . trim(UPLOADIMAGES_DIR, '/');
public_html/lists/admin/plugins/CKEditorPlugin.php:        $this->kcEnabled = defined('UPLOADIMAGES_DIR') && UPLOADIMAGES_DIR !== false;
public_html/lists/admin/sendemaillib.php:    if (defined('UPLOADIMAGES_DIR') && UPLOADIMAGES_DIR) {
public_html/lists/admin/sendemaillib.php:        $dir = str_replace('/', '\/', UPLOADIMAGES_DIR);
public_html/lists/admin/sendemaillib.php:            '<img\\1src="'.$GLOBALS['public_scheme'].'://'.$baseurl.'/'.UPLOADIMAGES_DIR.'\\2>',
public_html/lists/admin/class.phplistmailer.php:        if (defined('UPLOADIMAGES_DIR')) {
public_html/lists/admin/class.phplistmailer.php:                is_file($_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/image/'.$localfile)
public_html/lists/admin/class.phplistmailer.php:                || is_file($_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/'.$localfile)
public_html/lists/admin/class.phplistmailer.php:        if (defined('UPLOADIMAGES_DIR')) {
public_html/lists/admin/class.phplistmailer.php:                } elseif (is_file($_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/image/'.$localfile)) {
public_html/lists/admin/class.phplistmailer.php:                    SaveConfig('uploadimageroot', $_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/image/', 0, 1);
public_html/lists/admin/class.phplistmailer.php:                    return base64_encode(file_get_contents($_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/image/'.$localfile));
public_html/lists/admin/class.phplistmailer.php:                } elseif (is_file($_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/'.$localfile)) {
public_html/lists/admin/class.phplistmailer.php:                    SaveConfig('uploadimageroot', $_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/', 0, 1);
public_html/lists/admin/class.phplistmailer.php:                    return base64_encode(file_get_contents($_SERVER['DOCUMENT_ROOT'].'/'.UPLOADIMAGES_DIR.'/'.$localfile));
[root@hetzner2 phplist.opensourceecology.org]#
    1. but when I was uploading the template, I got an error for a different image directory
 Image browsing is not available because directory "/var/www/html/phplist.opensourceecology.org/public_html/images" does not exist or is not writeable. How to resolve this problem.
  1. I went ahead and explicitly set the image directory to be "public_html/images"
  2. I determined the following hardened permissions to be ideal & documented it on the wiki https://wiki.opensourceecology.org/wiki/Phplist#Proper_File.2FDirectory_Ownership_.26_Permissions
vhostDir="/var/www/html/phplist.opensourceecology.org"

chown -R not-apache:apache "${vhostDir}"
find "${vhostDir}" -type d -exec chmod 0050 {} \;
find "${vhostDir}" -type f -exec chmod 0040 {} \;

chown not-apache:apache-admins "${vhostDir}/config.php"
chmod 0040 "${vhostDir}/config.php"

[ -d "${vhostDir}/public_html/uploadimages" ] || mkdir "${vhostDir}/public_html/uploadimages"
chown -R apache:apache "${vhostDir}/public_html/uploadimages"
find "${vhostDir}/public_html/uploadimages" -exec chmod 0660 {} \;
chmod 0770 "${vhostDir}/public_html/uploadimages"
  1. I uploaded the 'logo-splash.145356.145846.png' to /var/www/html/phplist.opensourceecology.org/public_html/uploadimages/osemail_header_logo_2002.png' https://phplist.opensourceecology.org:4443/uploadimages/osemail_header_logo_2012.png
  2. I changed the "Edit your subscription" link in the template to be the proper phplist placeholder = [PREFERENCESURL] https://www.phplist.org/manual/ch023_advanced-templating.xhtml
  3. I also did this for the unsubscribe link using [UNSUBSCRIBEURL]
  4. I removed the jquery script include. It's bad enough that this shit is html, but I do support the idea of showing pictures of our latest tractor, extreme build, etc. There's no fucking reason for JS in an email.
  5. I went through and removed all the external links, replacing them with the text 'url_scrubbed'
  6. I replaced the URL for the "Web Version" link with "TODO"; we'll need to test the plugin for this purpose, and I should also discuss this whole concept with Marcin. It would enable "liking" the newsletter, but maybe we should just write a blog post corresponding to every newsletter and use _that_ instead? It's more complicated for the newsletter author, but it makes much more sense. Anyway, marking as TODO.
  7. I replaced the links for "like" & "tweet" with TODO as well
  8. I uploaded the twitter icon to our uploadimages dir https://phplist.opensourceecology.org:4443/uploadimages/tweet-glyph.png
  9. I uploaded the fb "like" icon to our uploadimages dir https://phplist.opensourceecology.org:4443/uploadimages/like-glyph.png
  10. I uploaded the "forward to a friend" icon to our uploadimages dir https://phplist.opensourceecology.org:4443/uploadimages/forward-glyph.png
  11. I re-uploaded the template
    1. I whitelisted a modsec rule = 981257, sqli
    2. 981240, sqli
    3. 981243, sqli
    4. 973336, xss
    5. 958057, xss
    6. 958006, xss
    7. 958049, xss
    8. 958051, xss
    9. 958056, xss
    10. 958011, xss
    11. 958039, xss
    12. 973301, xss
    13. 973302, xss
    14. 973308, xss
    15. 973314, xss
    16. 973331, xss
    17. 973315, xss
    18. 973330, xss
    19. 973327, xss
    20. 973322, xss
    21. 973348, xss
    22. 973321, xss
    23. 973335, xss
    24. 973334, xss
    25. 973347, xss
    26. 973332, xss
    27. 973316, xss
    28. 200004, ??
  12. I sent a new campaign from the updated template.
    1. In gmail the images still didn't load! I confirmed that the link is correct
    2. I went into thunderbird, enabled images to be displayed, and it looked fine. wtf is wrong with gmail?
    3. for some reason, the only image that came through was the phplist logo in the footer. hmm..
    4. so gmail doesn't link directly to the image, they do this weird proxy thing:
      1. https://ci5.googleusercontent.com/proxy/8FAbvJAr13AzwPxJEDR-C_risW8WqPu7HeTBehW-NCln6Q3T7ekiEaXGs4plOPY-GA5scZUuZTmr9UtjfeH3nrjcLqa5lKme6xbtiTOFnl3irYSLCdEpOL47Xjg=s0-d-e1-ft#https://phplist.opensourceecology.org:4443/uploadimages/like-glyph.png
      2. https://gm1.ggpht.com/glxzSZgoelvUDXXjs6QRjgij8TlMu5IxKnVTlRi9BYKiqG5sxEdv0XN0JDWvWWmYakbfetcYvO-qCJL0Ezs6z-5d_lqPBl-DZFScrNDYGNznQu6V6UufGvLJmYt_M8OCGTQ6Tr96c3jB6fb0qrO9m2Xwsw7TaoZhy17M4tBnA4mtpplttr_VnyHcBLn69v02dK9fjlm7sRBYvluvZ9iMIBu7UoGECpeB316AObRBj_RPq4es5ymtxYZIumcCMCfJXwMCaxKHh-5osXoMSPtvCXMe0_r7OP6k8p_Ww6MxB9rDYdUMIVoJI-Tl7aur4mplEiCa_dmvuv0m4pSpd8FFezSlNGvpS1KMOCJ8pW8KWRY_hHaHt4XjQcIjtwORU9Gq0KM2Dquw4adqLol52kwX6vYViavMHa86pf2jAvBnGUKtlpk-Zt_6d2zMn3A-Uq22uRmTX-q2UEC34M-QvRfB46_OTJyKSZ5u3J02mvaA_1WP75z-GQwxIm3NMgbuCCQv5OCijZg15V09H5FTL98bzOF86KjmC972e61T8-o7zHIQZgZuE7FB7EkiMWSNYOFUWvjHdvf08MoAreqRNduQO91aWwRr4V6BF0bTzAkXm89jygL4EITSmvWxbd-kpGSDmje6IQdxGobcLXC5-wJOViUuArD62fYyWoo5f1LykmKrTFoq63QFnxzbxTknv_0bh9QOZQndgxK8IYRI=s0-l75-ft-l75-ft
    5. it's possible that the proxy thing just doesn't work with the 4443 port. I may just have to change back to 443. I wish none of this had to be public. Another option is to put the images on a distinct domain? I should probably just use 443.
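A quick sanity check that the image itself is served fine over 4443 (i.e. the breakage is on the proxy's side, not ours):
# expect an HTTP 200 and an image/png content type
curl -sI 'https://phplist.opensourceecology.org:4443/uploadimages/like-glyph.png' | grep -iE '^(HTTP|Content-Type)'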
  13. Updated the osemail newsletter html template stored on the wiki https://wiki.opensourceecology.org/wiki/File:OSEmail_phplist.tar.gz


Thu Sep 13, 2018

  1. I logged into the phplist admin site for the first time in a while. I used the username 'admin'.
    1. I don't think there's a max password length, per my paranoia earlier this week. Actually, I think the issue was that I was trying to log in with the email address (michael@opensourceecology.org), but I should have used the "Login Name" = 'admin'
    2. I started poking around at the settings in the phplist wui, and I found that there are now a *lot* of email addresses set to "michael@opensourceecology.org". I should probably change all of these to 'phplist@opensourceecology.org', which forwards to me and marcin
    3. I checked out the plugins section. True, the default install ships with a few plugins, but they're not all enabled by default. By default:
      1. captcha plugin is installed, but disabled
      2. CKEditor plugin is installed & enabled
      3. CommonPlugin is installed, but disabled
      4. fckphplist is installed, but disabled
      5. inviteplugin is installed, but disabled
      6. SegmentPlugin is installed, but disabled
      7. subjectLinePlaceholdersPlugin is installed, but disabled
  2. I really want to do a fresh install to change the admin user to phplist@opensourceecology.org, but instead I found a button under "System" -> "Initialize the database" -> "Force Initialization (will erase all data!)".
    1. I pressed this button. It brought up a blank page. Concerning...
    2. I clicked around on the UI, but was told that I'd been logged out. I just went to "/lists/admin/", and I was prompted with a form to "Initialize the Database".
    3. This time, I entered both "my name" and the "organization name" as 'Open Source Ecology'. I used the email address 'phplist@opensourceecology.org'
    4. I created


Wed Sep 12, 2018

  1. reading through the "security" section of the extended config file, I found that--by default--anyone can update another user's info by re-subscribing with the existing email address. Not sure if this is an issue for GDPR or not, but there is a setting to disable this "functionality" https://mantis.phplist.org/view.php?id=15557
  2. so I think the actual password verification occurs in public_html/lists/admin/phpListAdminAuthentication.php
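A quick way to eyeball where that happens (and where the configured hash algo gets used) without opening an editor:
# list the function definitions in the admin auth class
grep -n 'function ' public_html/lists/admin/phpListAdminAuthentication.php
# and find every place the admin code references the configured hash algo
grep -rn 'HASH_ALGO' public_html/lists/admin/ | head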

Tue Sep 11, 2018

  1. I couldn't log in to phplist using my password stored in keepass. I reset it, but I still couldn't log in. This probably means that the password I'm storing in keepass is too long and phplist is cutting it off at some max length.
  2. yikes, I just discovered that phplist was storing the admin password in plaintext in 2012. They claimed to have "fixed it" by storing an md5() of the password in 2.11.7, despite outcries that they should be using SHA or bcrypt https://forums.phplist.com/viewtopic.php?t=38265
  3. I searched through all the documentation to find how they store the password, but it's not mentioned anywhere.
  4. I found this stub of a README on their security in their code base https://github.com/phpList/phplist3/blob/master/doc/README.security
  5. I found this bug, which claims it was fixed in 2.11.0, but it doesn't mention how it was fixed. I don't think we'd consider md5 hashes of the admin password--which gates access to all our subscribers' PII--sufficiently secure https://mantis.phplist.org/view.php?id=12822
  6. it looks like 2.11.0 was released on 2005-03-08 https://mantis.phplist.org/changelog_page.php
  7. unfortunately the changelogs only go back to 2011-11 https://mantis.phplist.org/plugin.php?page=Source/list&id=29&offset=109
  8. I began digging through the source code, and I discovered that many plugins are already installed. I guess they just get packaged with phplist's main release
[root@hetzner2 phplist.opensourceecology.org]# ls -lah public_html/lists/admin/plugins
total 136K
d---r-x---  8 not-apache apache 4.0K May 15 21:41 .
d---r-x--- 16 not-apache apache 4.0K Aug 23 01:54 ..
d---r-x---  2 not-apache apache 4.0K May 15 21:40 CaptchaPlugin
----r-----  1 not-apache apache 8.0K May 15 21:40 CaptchaPlugin.php
d---r-x---  3 not-apache apache 4.0K May 15 21:41 CKEditorPlugin
----r-----  1 not-apache apache  15K May 15 21:41 CKEditorPlugin.php
d---r-x---  3 not-apache apache 4.0K May 15 21:40 Common
d---r-x---  7 not-apache apache 4.0K May 15 21:40 CommonPlugin
----r-----  1 not-apache apache 3.8K May 15 21:40 CommonPlugin.php
----r-----  1 not-apache apache  35K May 15 21:41 COPYING.txt
d---r-x---  3 not-apache apache 4.0K May 15 21:40 fckphplist
----r-----  1 not-apache apache 2.8K May 15 21:40 fckphplist.php
----r-----  1 not-apache apache  462 May 15 21:40 .htaccess
----r-----  1 not-apache apache 3.9K May 15 21:41 inviteplugin.php
d---r-x---  3 not-apache apache 4.0K May 15 21:41 SegmentPlugin
----r-----  1 not-apache apache  19K May 15 21:41 SegmentPlugin.php
----r-----  1 not-apache apache 5.3K May 15 21:41 subjectLinePlaceholdersPlugin.php
[root@hetzner2 phplist.opensourceecology.org]# 
  1. looks like the default is still md5
[root@hetzner2 phplist.opensourceecology.org]# grep -C 5 'ENCRYPTION_ALGO' public_html/lists/admin/init.php 
if (ASKFORPASSWORD && defined('ENCRYPTPASSWORD') && ENCRYPTPASSWORD) {
	#https:mantis.phplist.com/view.php?id=16787
	// passwords are encrypted, so we need to stick to md5 to keep working

	//# we also need some "update" mechanism to handle an algo change
	if (!defined('ENCRYPTION_ALGO')) {
		define('ENCRYPTION_ALGO', 'md5');
	}
}

if (ASKFORPASSWORD && !defined('ENCRYPTPASSWORD')) {
	//# we now always encrypt
	define('ENCRYPTPASSWORD', 1);
}
if (!defined('ENCRYPTPASSWORD')) {
	//# old method to encrypt, used to be with md5, keep like this for backward compat.
	if (!defined('ENCRYPTION_ALGO')) {
		define('ENCRYPTION_ALGO', 'md5');
	}
//  define("ENCRYPTPASSWORD",0);
}

if (!defined('ENCRYPTION_ALGO')) {
	if (function_exists('hash_algos') && in_array('sha256', hash_algos())) {
		define('ENCRYPTION_ALGO', 'sha256');
	} else {
		define('ENCRYPTION_ALGO', 'md5');
	}
}
if (!defined('HASH_ALGO')) {
	// keep previous hashalg. @@TODO force an update of hash method, many may still be on md5.
	if (defined('ENCRYPTION_ALGO')) {
		define('HASH_ALGO', ENCRYPTION_ALGO);
	} elseif (function_exists('hash_algos') && in_array('sha256', hash_algos())) {
		define('HASH_ALGO', 'sha256');
	} else {
		define('HASH_ALGO', 'md5');
	}
[root@hetzner2 phplist.opensourceecology.org]# 
  1. Looks like we already set the hash algo to sha256 for passwords in config.php
[root@hetzner2 phplist.opensourceecology.org]# grep -C3 'HASH_ALGO' config.php 
// choose the hash method for password
// check the extended config for more info
// in most cases, it is fine to leave this as it is
define('HASH_ALGO', 'sha256');
[root@hetzner2 phplist.opensourceecology.org]# 
  1. I also found a command that showed all possible hash algorithms
[root@hetzner2 phplist.opensourceecology.org]# grep -B10 'HASH_ALGO' public_html/lists/config/config_extended.php 
// if you use passwords, they will be stored hashed
// set this one to the algorythm to use. You can find out which ones are
// supported by your system with the command
// $ php -r "var_dump(hash_algos());";
// "sha256" is fairly common on the latest systems, but if your system is very old (not a good idea)
// you may want to set it to "sha1" or "md5"
// if you use encrypted passwords, users can only request you as an administrator to
// reset the password. They will not be able to request the password from
// the system
// if you change this, you may have to use the "Forgot password" system to get back in your installation
define('HASH_ALGO', 'sha256');
[root@hetzner2 phplist.opensourceecology.org]# 
  1. And here are our options
[maltfield@hetzner2 ~]$ php -r "var_dump(hash_algos());";
Command line code:1:
array(46) {
  [0] =>
  string(3) "md2"
  [1] =>
  string(3) "md4"
  [2] =>
  string(3) "md5"
  [3] =>
  string(4) "sha1"
  [4] =>
  string(6) "sha224"
  [5] =>
  string(6) "sha256"
  [6] =>
  string(6) "sha384"
  [7] =>
  string(6) "sha512"
  [8] =>
  string(9) "ripemd128"
  [9] =>
  string(9) "ripemd160"
  [10] =>
  string(9) "ripemd256"
  [11] =>
  string(9) "ripemd320"
  [12] =>
  string(9) "whirlpool"
  [13] =>
  string(10) "tiger128,3"
  [14] =>
  string(10) "tiger160,3"
  [15] =>
  string(10) "tiger192,3"
  [16] =>
  string(10) "tiger128,4"
  [17] =>
  string(10) "tiger160,4"
  [18] =>
  string(10) "tiger192,4"
  [19] =>
  string(6) "snefru"
  [20] =>
  string(9) "snefru256" 
  [21] =>
  string(4) "gost"
  [22] =>
  string(11) "gost-crypto"
  [23] =>
  string(7) "adler32"   
  [24] =>
  string(5) "crc32"
  [25] =>
  string(6) "crc32b"
  [26] =>
  string(6) "fnv132"
  [27] =>
  string(7) "fnv1a32"   
  [28] =>
  string(6) "fnv164"
  [29] =>
  string(7) "fnv1a64"   
  [30] =>
  string(5) "joaat"
  [31] =>
  string(10) "haval128,3"
  [32] =>
  string(10) "haval160,3"
  [33] =>
  string(10) "haval192,3"
  [34] =>
  string(10) "haval224,3"
  [35] =>
  string(10) "haval256,3"
  [36] =>
  string(10) "haval128,4"
  [37] =>
  string(10) "haval160,4"
  [38] =>
  string(10) "haval192,4"
  [39] =>
  string(10) "haval224,4"
  [40] =>
  string(10) "haval256,4"
  [41] =>
  string(10) "haval128,5"
  [42] =>
  string(10) "haval160,5"
  [43] =>
  string(10) "haval192,5"
  [44] =>
  string(10) "haval224,5"
  [45] =>
  string(10) "haval256,5"
}
[maltfield@hetzner2 ~]$ 
  1. On the positive side, they do have a CHECK_SESSIONIP setting that makes sure the IP address doesn't change mid-session. That's generally good protection against a stolen session cookie, though I certainly change IPs mid-session a lot due to my VPN.
// to increase security the session of a user is checked for the IP address                                                                                           
// this needs to be the same for every request. This may not work with                                                                                                
// network situations where you connect via multiple proxies, so you can                                                                                              
// switch off the checking by setting this to 0                                                                                                                       
define('CHECK_SESSIONIP', 1);  	
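I should double-check whether our own config.php overrides this; a quick grep from the vhost dir would show it:
# see whether our config sets CHECK_SESSIONIP or we're on the shipped default
grep -rn 'CHECK_SESSIONIP' config.php public_html/lists/config/config_extended.php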

Mon Sep 10, 2018

  1. I didn't hear back from Marcin about my email asking him whether reducing the 2-column OSEmail template down to 1 column was OK.
  2. I began tweaking the html of this file. I uploaded the old & current versions to the wiki. Note that the wiki prevents uploading html files (for good reason), so I first compressed them into a .tar.gz for the upload. It's just a 1-file tar "bomb" to mask the mime type & contents (command sketched after this list).
    1. I uploaded the original here https://wiki.opensourceecology.org/wiki/File:OSEmail_5.0.tar.gz
    2. I uploaded the new template here https://wiki.opensourceecology.org/wiki/File:OSEmail_phplist.tar.gz
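For reference, the "tar bomb" step was just something like this (the html filename is assumed here):
# wrap the single html file in a .tar.gz so the wiki will accept the upload (masks the mime type)
tar -czf OSEmail_phplist.tar.gz osemail.20180910b.html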
  3. so the existing osemail template includes a facebook "like" button, which doesn't really make sense unless we make the html content of the message publicly hosted somewhere
    1. we could link to 'https://www.facebook.com/plugins/like.php?href=http://www.facebook.com/yourpagename' as a general "like our page" link https://stackoverflow.com/questions/9111354/how-to-add-a-facebook-like-button-to-email
    2. I guess we could do the same thing for the twitter
    3. so the way mailchimp does this is by creating a browser-based archive copy of the campaign, which is then likeable https://mailchimp.com/help/add-a-like-button-to-a-link-in-a-campaign/
    4. ok, it looks like what we want is this "view in browser" plugin for phplist https://resources.phplist.com/plugin/viewinbrowser

Sat Sep 08, 2018

  1. I reproduced the error that Catarina got when attempting to update the microfactory site using the tatsu editor in the wordpress dashboard
    1. the error came from apache. nginx accepted the request and varnish passed it to apache; then apache, finally, rejected the call to 'wp-admin/admin-ajax.php' with a 413 error because the request body's no-files data length exceeded 131072.
==> microfactory.opensourceecology.org/error_log <==
[Sat Sep 08 17:51:47.093354 2018] [:error] [pid 18400] [client 127.0.0.1] ModSecurity: Request body no files data length is larger than the configured limit (131072).. Deny with code (413) [hostname "microfactory.opensourceecology.org"] [uri "/wp-admin/admin-ajax.php"] [unique_id "W5QMM@d1Eam-vw5aGKHpRQAAAAE"]

==> microfactory.opensourceecology.org/access_log <==
127.0.0.1 - - [08/Sep/2018:17:51:47 +0000] "POST /wp-admin/admin-ajax.php HTTP/1.0" 413 352 "https://microfactory.opensourceecology.org/wp-admin/admin-ajax.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36 OPR/39.0.2256.43"
  1. the source of this error is /etc/httpd/conf.d/mod_security.conf
[root@hetzner2 httpd]# grep -irl '131072' *
conf.d/mod_security.conf
logs/microfactory.opensourceecology.org/error_log
logs/microfactory.opensourceecology.org/error_log-20180907
logs/modsec_audit.log
[root@hetzner2 httpd]#
  1. the relevant section from that config file is:
<IfModule mod_security2.c>                                                                                                  
	# ModSecurity Core Rules Set configuration                                                                              
   IncludeOptional modsecurity.d/*.conf                                                                                     
   IncludeOptional modsecurity.d/activated_rules/*.conf                                                                     
																															
	# Default recommended configuration                                                                                     
	SecRuleEngine On                                                                                                        
	SecRequestBodyAccess On                                                                                                 
	SecRule REQUEST_HEADERS:Content-Type "text/xml" \                                                                       
		 "id:'200000',phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML"                                   
	SecRequestBodyLimit 13107200                                                                                            
	SecRequestBodyNoFilesLimit 131072                                                                                       
	SecRequestBodyInMemoryLimit 131072                                                                                      
	SecRequestBodyLimitAction Reject      
...
  1. I also checked the modsecurity log, and found this
[root@hetzner2 httpd]# tail -f /var/log/httpd/modsec_audit.log
--e0cf6203-A--
[08/Sep/2018:18:00:19 +0000] W5QOM4DjLizX-0PH1C4iSQAAAAw 127.0.0.1 58464 127.0.0.1 8000
--e0cf6203-B--
POST /wp-admin/admin-ajax.php HTTP/1.0
X-Real-IP: 155.254.31.28
X-Forwarded-Proto: https
X-Forwarded-Port: 443
Host: microfactory.opensourceecology.org
Content-Length: 169317
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:48.0) Gecko/20100101 Firefox/48.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-WP-Nonce: 6019e82b06
X-Requested-With: XMLHttpRequest
Referer: https://microfactory.opensourceecology.org/wp-admin/admin-ajax.php
Cookie: wordpress_sec_8983be88aab2a9bb71a8df7c9f7251b6=maltfield%7C1536598694%7CSPF6aXvZhhNnPXcg6tp828xXUxc3tE2CeLCalwJ1JO8%7Cbd0bf9738572e47267dc4158ad07c0ba8ac788a2e5bf784b1dd028
623a7f15cc; wordpress_test_cookie=WP+Cookie+check; wordpress_logged_in_8983be88aab2a9bb71a8df7c9f7251b6=maltfield%7C1536598694%7CSPF6aXvZhhNnPXcg6tp828xXUxc3tE2CeLCalwJ1JO8%7C8a406
513c27482d06d46c366f0b3cb0545359b3fd049c50a9cb481a464dd307f; 1; OSESESSION=rj1k8q34mo0q6ih903ahhcu3mg08jegduvhmmeb6ss5tm478mpucfauft24bkojt8vtqla9ivlnj6qcn0q8lko52rsvv8dvbko0l0l0; 
wp-settings-time-1=1536428183
DNT: 1
X-Forwarded-For: 155.254.31.28, 127.0.0.1, 127.0.0.1
X-VC-Cacheable: NO:Request method:POST
hash: #microfactory.opensourceecology.org
X-Varnish: 33304491

--e0cf6203-F--
HTTP/1.1 413 Request Entity Too Large
Content-Length: 352
Connection: close
Content-Type: text/html; charset=iso-8859-1

--e0cf6203-E--

--e0cf6203-H--
Message: Request body no files data length is larger than the configured limit (131072).. Deny with code (413)
Apache-Handler: php5-script
Stopwatch: 1536429619993525 5932 (- - -)
Stopwatch2: 1536429619993525 5932; combined=173, p1=104, p2=0, p3=0, p4=37, p5=31, sr=21, sw=1, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED"

--e0cf6203-Z--
[root@hetzner2 httpd]#
  1. so there are clearly different max size settings depending on whether or not the request contains files. As we see above, the request being denied is one that doesn't include files. The default settings here have SecRequestBodyNoFilesLimit set to '131072' = 128 KB. The same default config sets the limit for requests _with_ files = 'SecRequestBodyLimit' = '13107200' = 12.5 MB.
  2. from a DOS perspective, it does make sense to limit the size of requests, but I'm not sure it makes sense to have distinct limits for requests that do or don't have files. If someone wants to do harm, they would just add a fat file to the request. Therefore, I think there's basically no harm in increasing SecRequestBodyNoFilesLimit to the current setting of SecRequestBodyLimit. Moreover, nginx should really be our DOS protector.
  3. so I went ahead and changed SecRequestBodyNoFilesLimit from '131072' to '13107200'
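Roughly what that change looked like (the file is the one identified by the grep above); don't forget the config test & reload:
# bump the no-files request body limit (128 KB) up to match the with-files limit (12.5 MB)
vim /etc/httpd/conf.d/mod_security.conf   # SecRequestBodyNoFilesLimit 131072 -> 13107200

# validate & apply
apachectl configtest && systemctl reload httpd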
  4. upon retrying to save, the 413 error changed to a 403 error
  5. I went to check the modsec log and, damn, yeah--that's a fucking huge request! Here it is with only the first line of the request body
[root@hetzner2 httpd]# tail -f /var/log/httpd/modsec_audit.log
...
--4cc3de17-A--
[08/Sep/2018:18:15:26 +0000] W5QRvjhQN4@9LalMESgKRQAAAAQ 127.0.0.1 34488 127.0.0.1 8000
--4cc3de17-B--
POST /wp-admin/admin-ajax.php HTTP/1.0
X-Real-IP: 155.254.31.28
X-Forwarded-Proto: https
X-Forwarded-Port: 443
Host: microfactory.opensourceecology.org
Content-Length: 169317
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36
Accept: */*
Accept-Language: en-US,en;q=0.8
Accept-Encoding: gzip, deflate, br
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-WP-Nonce: 6019e82b06
X-Requested-With: XMLHttpRequest
Referer: https://microfactory.opensourceecology.org/wp-admin/admin-ajax.php
Cookie: wordpress_sec_8983be88aab2a9bb71a8df7c9f7251b6=maltfield%7C1536598694%7CSPF6aXvZhhNnPXcg6tp828xXUxc3tE2CeLCalwJ1JO8%7Cbd0bf9738572e47267dc4158ad07c0ba8ac788a2e5bf784b1dd028623a7f15cc; wordpress_test_cookie=WP+Cookie+check; wordpress_logged_in_8983be88aab2a9bb71a8df7c9f7251b6=maltfield%7C1536598694%7CSPF6aXvZhhNnPXcg6tp828xXUxc3tE2CeLCalwJ1JO8%7C8a406513c27482d06d46c366f0b3cb0545359b3fd049c50a9cb481a464dd307f; 1; OSESESSION=rj1k8q34mo0q6ih903ahhcu3mg08jegduvhmmeb6ss5tm478mpucfauft24bkojt8vtqla9ivlnj6qcn0q8lko52rsvv8dvbko0l0l0; wp-settings-time-1=1536428183
DNT: 1
X-Forwarded-For: 155.254.31.28, 127.0.0.1, 127.0.0.1
X-VC-Cacheable: NO:Request method:POST
hash: #microfactory.opensourceecology.org
X-Varnish: 33183543

--4cc3de17-C--
action=tatsu_save_store&post_id=85&page_content=%5B%7B%22name%22%3A%22tatsu_section%22%2C%22id%22%3A%22Bk-5JP9-RG%22%2C%22atts%22%3A%7B%22bg_color%22%3A%22rgba(255%2C250%2C246%2C1)%22%2C%22bg_image%22%3A
...
--4cc3de17-F--
HTTP/1.1 403 Forbidden
Content-Length: 225
Connection: close
Content-Type: text/html; charset=iso-8859-1

--4cc3de17-E--

--4cc3de17-H--
Message: Access denied with code 403 (phase 2). Pattern match "(?i:(?:\\sexec\\s+xp_cmdshell)|(?:[\"'`\xc2\xb4\xe2\x80\x99\xe2\x80\x98]\\s*?!\\s*?[\"'`\xc2\xb4\xe2\x80\x99\xe2\x80\x98\\w])|(?:from\\W+information_schema\\W)|(?:(?:(?:current_)?user|database|schema|connection_id)\\s*?\\([^\\)]*?)|(?:[\"'`\xc2\xb4\xe2 ..." at ARGS:page_content. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "207"] [id "981255"] [msg "Detects MSSQL code execution and information gathering attempts"] [data "Matched Data: \x22selecte found within ARGS:page_content: [{\x22name\x22:\x22tatsu_section\x22,\x22id\x22:\x22Bk-5JP9-RG\x22,\x22atts\x22:{\x22bg_color\x22:\x22rgba(255,250,246,1)\x22,\x22bg_image\x22:\x22\x22,\x22bg_repeat\x22:\x22no-repeat\x22,\x22bg_attachment\x22:\x22scroll\x22,\x22bg_position\x22:\x22top left\x22,\x22bg_size\x22:\x22cover\x22,\x22bg_animation\x22:\x22none\x22,\x22padding\x22:{\x22d\x22:\x22120px 0px 120px 0px \x22,\x22l\x22:\x220px 0px 60px 0px \x22,\x22t\x22:\
Action: Intercepted (phase 2)
Apache-Handler: php5-script
Stopwatch: 1536430526571778 46012 (- - -)
Stopwatch2: 1536430526571778 46012; combined=39101, p1=135, p2=38938, p3=0, p4=0, p5=28, sr=20, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED"

--4cc3de17-Z--
    1. added '981255' (sqli) to the wordpress whitelist (a config sketch of both ModSecurity changes follows below)
  1. that worked! I was able to change some content, and the admin-ajax.php request got a 200 back. I cleared the varnish cache, reloaded in another browser, and it showed the new content.
  2. I sent an email to Catarina asking her to retry.
    1. Catarina confirmed that it's now working
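For reference, here's a minimal sketch of the two ModSecurity changes described above; the file paths and the <LocationMatch> scoping are illustrative assumptions, not an exact copy of our config layout.

# raise the no-files request body limit to match the with-files limit (12.5 MB);
# nginx remains our first line of DoS protection
SecRequestBodyNoFilesLimit 13107200

# wordpress whitelist: disable the MSSQL/sqli CRS rule (981255) that false-positives
# on the tatsu_save_store 'page_content' payload
<LocationMatch "^/wp-admin/admin-ajax\.php$">
	SecRuleRemoveById 981255
</LocationMatch>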
  1. ...
  1. I went to add the confirmation code (which Marcin forwarded to me) permitting privacy@opensourceecology.org to forward to marcin@opensourceecology.org, but I couldn't login. The password in keepass appeared to be only 20 characters. I just went ahead and generated a new 100-character password, stored it in keepass, updated it in the admin.google.com (g-suite) console, and logged in. Then it asked me to create a new password _again_, so I generated another new 100-character password in keepass & stored it.
  2. Finally, I added the confirmation code for forwarding emails to Marcin
    1. note that the obvious place for forwarding mail only allows you to forward to 1x address at a time. To achieve a forward to 2x addresses, I created 2x filters (one for each address) that trigger when the message is "> 0 bytes" (ie: always, on all mail) per https://webapps.stackexchange.com/questions/50372/auto-forwarding-emails-to-2-email-addresses

Fri Sep 07, 2018

  1. Catarina said she's having issues uploading some content to the new microfactory wordpress site. She's getting a 413. I asked if her images were reasonably sized
  2. The CEO of phplist (user samtuke) just awarded my user account on discuss.phplist.org the "New User of the Month" badge for my recent 3x posts suggesting improvements to their documentation (which hasn't yet been done) https://discuss.phplist.org/u/maltfield/badges
  3. I came across a couple more posts on the phplist discussion board about GDPR
    1. https://discuss.phplist.org/t/gdpr-re-permission-campaigns-invite-plugin/4086/4
    2. https://discuss.phplist.org/t/using-phplist-for-compliance-with-the-gdpr-manual-chapter-feedback-and-discussion/3985/4

Wed Sep 05, 2018

  1. Marcin got back to me about the phplist template. He said the template was built by Tristan Copley-Smith, but that it was for Mailchimp.
    1. I sent an email to Tristan asking if they built the template and, if so, how. And, if not, do they know who did?
    2. It looks like Tristan's user account was deleted by Tom Griffing on 2015-06 https://wiki.opensourceecology.org/wiki/User:Tristan
    3. Tristan responded almost immediately via mobile indicating that they used a service called Campaign Monitor
    4. Indeed, we have an article on them https://wiki.opensourceecology.org/wiki/Campaign_Monitor
  1. so I don't know how Campaign Monitor works, but I don't think this template is going to translate well to phplist. For example, there's no auto-gen for a table of contents https://www.phplist.org/manual/ch023_advanced-templating.xhtml
    1. I don't think we're going to be able to do 2x columns without making the email writer learn a whole lotta html. Or copying & pasting between other programs. I think the best option is to just simplify it to one column.
  1. I started poking through the phplist plugins to see if there was anything for templating. There's only 40x plugins published. https://resources.phplist.com/plugins/start
    1. there's some interesting plugins to automatically send campaigns to subscribers in intervals after they've initially subscribed. So, like, if we were selling things, we could try to keep people's interest 30 days after their subscription by sending them a 50% off coupon. I guess that's what it's for.
    2. there's some plugins for putting CAPTCHAs on the subscribe field. Would bots really want to have us send _them_ spam?
    3. that's all that's up-to-date. Most of these plugins haven't been updated in over 3 years.
    4. ah, nice, I found what I was looking for: the "Content Areas" plugin allows us to create distinct fields and incorporate them into different areas of the template. It also includes generating a TOC https://resources.phplist.com/plugin/contentareas
      1. unfortunately, it hasn't been updated since 2015-10.
      2. here's the sourcecode https://github.com/bramley/phplist-plugin-contentareas
    5. the Display in Browser plugin is probably useful https://resources.phplist.com/plugin/viewinbrowser
    6. I wonder how many of these plugins are no longer developed because their functionality has just been incorporated into the main phplist program
    7. there's a plugin for campaign stats, but I think that's already incorporated into phplist.. https://resources.phplist.com/plugin/campaignstatistics
    8. there's some stuff for WYSIWYG, for example using FCKeditor https://resources.phplist.com/plugin/fckphplist
    9. or tinymce https://resources.phplist.com/plugin/tinymce
    10. interesting, there's a plugin to ban subscribers using disposable email accounts, like mailinator https://resources.phplist.com/plugin/preventdisposable
      1. I guess this is for people who make it required to submit your email in order to download something or get a freebie. We wouldn't do that slimy shit, anyway. Not to mention that I think it violates GDPR
  1. I also found a list of example templates for phplist here https://resources.phplist.com/templates
  2. but the above doesn't have any previews. here's some more in github https://github.com/phpList/phplist-templates
  1. Marcin reiterated that we should just copy alephobjects, and he linked me to their Operations Manual http://devel.alephobjects.com/ao/documentation/AOOM/AOOM.pdf
    1. looks like this is the url for their phplist site https://phplist.alephobjects.com
    2. note that the root of this phplist vhost for lulzbot = alephobjects just does a html meta redirect to the '/lists' dir
user@personal:~$ curl -i https://phplist.alephobjects.com
HTTP/1.1 200 OK
Date: Wed, 05 Sep 2018 20:41:53 GMT
Server: Apache/2.4.10 (Debian)
Last-Modified: Mon, 22 Feb 2016 22:52:18 GMT
ETag: "264-52c63af4d0c80"
Accept-Ranges: bytes
Content-Length: 612
Vary: Accept-Encoding
Content-Type: text/html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<meta http-equiv="Refresh" content="2;url=http://phplist.alephobjects.com/lists">
<title>phplist.alephobjects.com</title>
</head>
<body bgcolor=#FFFFFF>
<div align="center">
<p>
You are being redirected to a sign up list for email newsletters maintained by Aleph Objects, Inc., makers of the LulzBot desktop 3D printers.
</p><br/><br/>
<a href="http://www.phplist.com"><img src="lists/images/power-phplist.png" border=0></a>
</div>
</body>
</html>
user@personal:~$ 
    1. I emailed Claudio at Aleph Objects to let them know about the 404, and to ask how they approached GDPR


Tue Sep 04, 2018

  1. Marcin sent me the google doc of our OSEmail subscribers
    1. wow, there's 1,155 rows on here (most are subscribe, a few are unsubscribe..hmm)
    2. we have 4 fields: timestamp, email address, what would you like to do (subscribe, unsubscribe), how did you find about us?
  2. Marcin sent me an email about some users being blocked, but he didn't specify what they were being blocked from or why. I asked for clarification.
  3. I dug through the non-mediawiki interwiki sites to find their privacy policies, hoping to find something simple yet similar to our needs (ie: specific to mediawiki, and already open sourced) https://wiki.opensourceecology.org/wiki/Special:Log/interwiki
    1. http://www.appropedia.org/Appropedia:Privacy_policy
      1. this is decent, non-generic, and covers another site outside the wiki. It's clearly not GDPR compliant, though.
    2. http://akvopedia.org/wiki/Akvopedia:Privacy_policy
      1. this is also pretty good. Like the above, it was derived from mediawiki's privacy policy. Maybe we can cherry-pick from both?
    3. https://blog.p2pfoundation.net/privacy-policy
      1. this is pretty generic, nothing specific about mediawiki here. not sure we want that.
    4. http://wiki.fablab.is/wiki/Fab_Lab_Wiki_-_by_NM%C3%8D_Kvikan:Privacy_policy
      1. empty page x_x
  4. I created a new email 'privacy@opensourceecology.org' and stored the creds in keepass. I logged-in & set it up to forward to me & Marcin's email addresses. I confirmed for me, but I need Marcin to send me his confirmation # in order to get it setup to forward to him. I emailed him asking for this confirmation number.
  5. I added actual content to our privacy policy page https://wiki.opensourceecology.org/wiki/Open_Source_Ecology:Privacy_policy
    1. I started with a dump of the Appropedia wiki's privacy policy. Rather than edit it locally, I made incremental changes on the wiki so that all the changes I made from the Appropedia base would be visible
    2. I did a diff of our new page's source and Akvopedia's Privacy Policy source so I could see what Appropedia had changed. I made a few changes
    3. I updated the security section, copying from the current mediawiki Privacy Policy. Note that under GDPR we _do_ have a responsibility to maintain the security of the server (which we are doing), and that should be specified in the Privacy Policy
    4. I added some sections about our requirements for data retrieval/rectification/deletion/etc to the User Data section.
    5. so far it looks pretty good. The only changes I can think of would be to list _what_ data we store. That may or may not be necessary.
  6. I sent an email to Marcin & Catarina about the new Privacy Policy

Mon Sep 03, 2018

  1. did some reading up on the Wikipedia Privacy policy. It's very non-trivial. https://foundation.wikimedia.org/wiki/Privacy_policy
    1. it was approved and went into effect in May, just before GDPR. There's no explicit reference to GDPR, but there is a FAQ with info on how to contact wikimedia for requesting to retrieve or delete PII.
    2. unfortunately, this article links to a dozen other articles on policies & procedures that we would have to create & maintain on our wiki. One does not simply adopt wikimedia's ToS, unfortunately.
    3. maybe I could just take their document, and replace all the links for "click here to learn how to.." with "email us to make this request". User self-service would be nicer, but the number of info requests should be low enough to be manageable, and probably require human admin intervention anyway.
    4. wow, I'm not even sure if this is GDPR-compliant:
      1. "The anonymization process cannot ensure complete or comprehensive anonymization of all of the content or information posted on Wikimedia Sites related to your prior username. If your request is granted, the name change will only occur in automatically generated logs (such as page histories) in association with content that you posted. The name change will not delete mentions of your prior username by third parties. For example, if you changed your username from MichaelPaul to Owlwatcher345, the content you contributed will be attributed to Owlwatcher345, but if another user has mentioned you by the name MichaelPaul in a discussion page, MichaelPaul will continue to appear rather than Owlwatcher345." https://foundation.wikimedia.org/wiki/Privacy_policy/FAQ#Can_I_delete_and/or_anonymize_any_content_I_post_on_any_Wikimedia_Site,_if_I_don%27t_want_to_be_personally_identified?_If_so,_how?
    5. this was explicitly called-out in the discussion. Even though the new Privacy Policy came out the same month that GDPR went into effect, it's recognized to *not* be GDPR compliant, especially with respect to Article 13 of GDPR. https://meta.wikimedia.org/wiki/Talk:Privacy_policy?rdfrom=%2F%2Ffoundation.wikimedia.org%2Fw%2Findex.php%3Ftitle%3DTalk%3APrivacy_policy%26redirect%3Dno#GDPR
    6. here is Article 13 https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/principles-gdpr/what-information-must-be-given-individuals-whose-data-collected_en
    7. ugh, note that wikimedia has a distinct privacy policy for donors https://foundation.wikimedia.org/wiki/Donor_privacy_policy/en
  2. we probably want to adopt a much simpler privacy policy. For example, lulzbot's is pretty straight-forward
    1. https://www.lulzbot.com/content/privacy-policy
  3. I stumbled upon wikimedia's green Sustainability Initiative https://meta.wikimedia.org/wiki/Sustainability_Initiative
    1. they mentioned Greenpeace USA's "Click Clean Scorecard" https://meta.wikimedia.org/wiki/Sustainability_Initiative
    2. I sent them an email at greeninternet@greenpeace.org asking if they're interested in adding nonprofits to the list or if it's just for

Sun Sep 02, 2018

  1. continued through the phplist manual https://www.phplist.org/manual/ch005_logging-in.xhtml
  2. there were a few tips on how to run the "first campaign" doc that may be good to remember https://www.phplist.org/manual/ch007_sending-your-first-campaign.xhtml
    1. keep subject line <50 chars
    2. We may want to set the "from" field as 'OSE', but I'm thinking of using 'Marcin Jakubowski'
    3. Interestingly, phplist will replace URLs inserted with their WYSIWYG editor with an interstitial page for tracking statistics. Not sure if I like that.
      1. this is not done when the anchor text (within '<a...>' & '</a>' tags) is a url itself, to prevent being red-flagged
  3. sent an email to marcin asking for general info about the html in the latest OSEmail = OSEmail 5.0, and whether he wanted me to just import that into phplist or change it. I also asked who built it and how. https://opensourceecology.createsend1.com/t/ViewEmail/j/15C76874EE2BF993/076FCCA596DF178A9A8E73400EDACAB4
  4. sent an email to Claudio asking if they had any phplist plugins or settings that they recommend using..
  5. phplist does indeed have email templates https://www.phplist.org/manual/ch008_your-first-campaign.xhtml
    1. important: if your "lists" (of subscribers) in phplist have the same email address in many lists, and you choose to send a given campaign to multiple lists (so the total set then contains the email address more than once), phplist will only send that given subscriber a single message; it accounts for this.
    2. this document mentioned something about having to leave the window open while processing a campaign in the queue. Hopefully that's not strictly necessary..
  6. this chapter has useful info about understanding phplist's stats https://www.phplist.org/manual/ch009_basic-campaign-statistics.xhtml
    1. this has more detailed info on stats https://www.phplist.org/manual/ch026_advanced-campaign-statistics.xhtml
  7. this page has an interesting list of column names for data that phplist will automatically understand. For example: email, password, optedin, htmlemail, etc (a hypothetical import CSV is sketched below) https://www.phplist.org/manual/ch013_adding-subscribers-to-lists.xhtml
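ie: a hypothetical import CSV using a few of those column names might look like the following (the exact set of recognized columns and accepted values is per that manual page):

email,htmlemail,optedin
jane.doe@example.com,1,1
john.smith@example.com,0,1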
  8. this page has info on creating a subscription page, which I'll need to do shortly https://www.phplist.org/manual/ch014_creating-a-subscribe-page.xhtml
    1. it suggests making the firstName & lastName fields optional. This is a good idea.
    2. this page says that, after we make a subscription page, we can use it in a special Admin mode so that we can manually enter subscriber info, ie: after a workshop we add emails from a physical pen-and-paper-on-clipboard signup sheet. Note that there's an added box that we can use to make the user still opt-in following our entry. This is a good idea for the GDPR requirements ("Send this subscriber a request for confirmation email")
    3. this page describes how to make changes to the subscription page https://www.phplist.org/manual/ch025_subscribe-page-design-and-configuration.xhtml
  9. info page on how to use categories https://www.phplist.org/manual/ch015_setting-up-your-list-categories.xhtml
    1. cool, their example includes the 'veganism' category. do like <3
  10. there's a section on attributes https://www.phplist.org/manual/ch016_user-attributes.xhtml
    1. some attributes that I can think of that may be useful
      1. how did you hear about us (we already have this)?
      2. interests (ie: permaculture, earthen construction, plastics)
      3. skills (ie; electrical, mechanical, software development, documentation, cad)
  11. finally, a page on placeholders. These are like variables; apparently in phplist, the syntax is to surround the attribute name with square brackets, ie: "Hi [FIRSTNAME]," (an example follows at the end of this item) https://www.phplist.org/manual/ch017_using-placeholders.xhtml
    1. there's also built-in placeholders (variables) such as [UNSUBSCRIBE] or [WEBSITE] (from the config)
    2. we can also have a fall-back (by appending a double-percent-sign) for the placeholder when the subscriber entry doesn't have the info we need. For example, "Dear [FIRSTNAME%%Friends]".
      1. it is recommended that our tests include users with missing attributes so that we can test for both cases
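For example, a campaign body using these placeholders might look something like this (the attribute name and fallback text are just illustrative; the exact rendering of the built-in placeholders is per the phplist docs):

<p>Hi [FIRSTNAME%%Friends],</p>
<p>Here's the latest progress update from [WEBSITE].</p>
<p>[UNSUBSCRIBE]</p>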
  12. this guide describes how to segment lists. it mentions how it can be used for a/b testing, but I assumed this was a more guided process. perhaps ab tests are just manually done by firing-off two distinct campaigns with list segmentation? hmm https://www.phplist.org/manual/ch018_targeting-your-campaigns.xhtml
  13. there's some great info on how to create a template here https://www.phplist.org/manual/ch022_creating-a-template.xhtml
    1. so first I need to hear back from Marcin about his intentions to keep/diverge from the OSEmail 5.0 template. I should be able to mostly adopt it by copying & pasting the html, testing, and making changes as needed.
    2. this also lists all the built-in placeholders/variables we can use in the templates https://www.phplist.org/manual/ch023_advanced-templating.xhtml

Fri Aug 31, 2018

  1. Catarina sent a link about GDPR and how it applies to bloggers
    1. https://www.nomipalony.com/gdpr-for-bloggers/
    2. "Consent must specifically cover the controller’s name, the purposes of the processing and the types of processing activity."
    3. ^ we need to make sure that the "I consent" checkbox (which should be unchecked by default) includes this information. It would also be good if the DB records whether the person did it on our website (what URL/what time?) or signed a paper sheet which Marcin later inputted (where, what time?).
  2. I need to find a GDPR-aware ToS generator. Ideally, one with options so we can choose "mention that we have the right to send you emails," "mention that we will not be sharing your info with third parties", etc.
    1. I found one https://gdprprivacypolicy.net/
    2. the mediawiki article from yesterday also recommends this, but it's only in german https://datenschutz-generator.de/
    3. we may want to remove the 'email address' field from comments. instead, could we just use a captcha?
    4. this is a good spreadsheet that the ICO recommends we use to keep our ducks in a row https://ico.org.uk/media/for-organisations/documents/2172937/gdpr-documentation-controller-template.xlsx
  3. the mediawiki guide makes note of the fact that mediawiki--by design--doesn't allow for deletion of users or that user's actions https://www.mediawiki.org/wiki/GDPR_(General_Data_Protection_Regulation)_and_MediaWiki_software
    1. probably the best option is to create a user called "Deleted User", then use Extension:UserMerge to merge the user-to-be-"deleted" with the deleted user as described here https://wiki.opensourceecology.org/wiki/Mediawiki#Deleting_Users_by_Request
  4. Interesting, what if someone maliciously dumped a user or set of user's PII to our wiki? We could delete it, but we couldn't eradicate it from the history of the page unless we used something like https://www.mediawiki.org/wiki/Help:RevisionDelete
    1. we can also use Extension:UserAgreement to force people to accept our (new) ToS https://www.mediawiki.org/wiki/Extension:UserAgreement
  5. ""A checkbox on the sign-up form to accept the privacy policy (registration is rejected if the check is not ticked). The value of this checkbox should be recorded in database.
    1. important: we shouldn't just make code checking for this check-box be the blocker; it must also be recorded in the db when the checkbox was ticked!
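Something like this is the kind of record I have in mind; a purely hypothetical sketch (names and columns are illustrative, not an existing schema of ours):

-- hypothetical consent-log table
CREATE TABLE privacy_consent (
  id             INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_id        INT UNSIGNED NOT NULL,
  policy_version VARCHAR(32)  NOT NULL,  -- which privacy policy text was shown
  consented_at   DATETIME     NOT NULL,  -- when the box was ticked
  source         VARCHAR(64)  NOT NULL   -- ie: 'web signup form' or 'paper sheet entered by admin'
);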
  6. shit, that article also mentions disqus. What are our gdpr responsibilities for using disqus? https://www.mediawiki.org/wiki/Topic:Uddcz0ah9i70au4o
    1. in general, I hate disqus and would prefer a self-hosted JS-free option like Extension:Comments https://www.mediawiki.org/wiki/Extension:Comments
    2. disqus posted about this in early May, so I don't see why using them would be an issue.. https://blog.disqus.com/update-on-privacy-and-gdpr-compliance
    3. so disqus mentions that if we implemented SSO with disqus, then we may have responsibilities. otherwise, they're the data controller--not us.
    4. fuck, disqus mentions that they collect your browser's plugin types & versions. That sounds like fingerprinting for cross-site tracking :\ https://help.disqus.com/terms-and-policies/disqus-privacy-policy
    5. "We do not collect any Special Categories of Personal Data about you (this includes details about your race or ethnicity, religious or philosophical beliefs, sex life, sexual orientation, political opinions, trade union membership, information about your health and genetic and biometric data). Nor do we collect any information about criminal convictions and offences."
      1. that's a good list of stuff we don't want to collect. But why not trade union membership :(
  7. I ran across an article about "cold emailing," which suggests it's not outlawed. In our case, we'll be just doing an initial email asking people if they want to opt-in to the OSEmail newsletter. We'll only be using email addresses we already have from our wiki users, so it's not entirely cold. It's a very reasonable assumption that they will find the content useful.

Thr Aug 30, 2018

  1. reading though https://www.mediawiki.org/wiki/GDPR_(General_Data_Protection_Regulation)_and_MediaWiki_software
    1. we may want to use these extensions
    2. https://www.mediawiki.org/wiki/Extension:CookieWarning
  2. Google webmaster tools yelled at us for having so many 404s in our sitemap that mediawiki produced. No issue here, those 187 URLs it complained about don't exist (probably they used to, and mediawiki doesn't like to delete stuff from its db).
    1. Catarina was concerned, and I told her not to put much stock in these google webmaster tools alerts. Instead, we should pay more attention to StatusCake, which reported on 2018-08-18 that we had 100% uptime for all 6 of our websites last month. Woohoo!
  3. I decided to revisit the idea of mining cryptocurrency (ie: coinhive) on our wiki. Today is the last day of the month of August. Checking awstats, I see that we got 66,118 visits so far this month with an average time on the site of 215 seconds. So our site was open for roughly 66118×215 = 14215370 seconds in our clients' browsers this month (the arithmetic is sanity-checked in the snippet below).
    1. coinhive says 30 hashes/sec is reasonable. https://coinhive.com/info/faq
    2. Coinhive pays out in 0.000064 XMR per 1 million hashes. We'll be generating 14215370 seconds/month * 30 hashes/s = 426,461,100 hashes per month. 426461100÷1000000 = 426.4611 * 0.000064 xmr = 0.02729351 xmr / month.
    3. at current exchange rates, that's $2.77/mo. Better than the $0.10/mo that I calculated back in 2015-05-26, but still not reasonable.
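A quick sanity-check of the arithmetic above (same inputs as cited, nothing new):

<?php
// back-of-the-envelope check of the coinhive estimate (inputs from awstats & coinhive's FAQ, as cited above)
$seconds = 66118 * 215;                      // visits * avg seconds on site = 14,215,370 s/month
$hashes  = $seconds * 30;                    // 30 hashes/sec per open tab   = 426,461,100 hashes/month
$xmr     = ($hashes / 1000000) * 0.000064;   // coinhive payout per 1M hashes
echo $xmr . "\n";                            // ~0.0273 XMR/month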
further digging into our awstats suggests that we have much room for improvement for google searches leading to our site. It might be a good idea to add meta tags with some generic seo mediawiki extension...

Wed Aug 29, 2018

  1. fell into a hole researching GDPR. We do have many users in the EU. This just went live 3 months ago, and we're about to put together a newsletter for phplist, so it's time to ramp-up.
  2. Sent an email to Marcin & Catarina asking if they're familiar with GDPR and if we need to make any changes that they know of
    1. https://en.wikipedia.org/wiki/General_Data_Protection_Regulation
    2. https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/

Tue Aug 28, 2018

  1. began reading the phplist manual https://www.phplist.org/manual/ch001_system-overview.xhtml
  2. sent an email to marcin & catarina asking for a list of users who are currently on our mailing list
  3. looks like one of our first tasks is to integrate a form for signing up to the mailing list on our website
  4. we should also think about which attributes to store about each subscriber, other than first name, last name, and email address.
  5. we probably want to union the set of emails that marcin is currently sending to with the list of emails of users on the wiki. Then I'll send a first-ever phplist email campaign notifying users that we've setup phplist to better keep them informed with the progress of OSE, including a link to unsubscribe.

Mon Aug 27, 2018

  1. hetzner responded saying that October 15th is the earliest possible cancellation date, due to the "cancellation period"
Dear Mr Altfield

Thank you for your e-mail.

Unfortunately it is not possible to cancel the server with immediate effect.
So the cancellation date shown in your KonsoleH account is the earliest
possible cancellation date due to the cancellation period.

Thank you for your understanding.

Kind regards

Magdalena Grimm

Hetzner Online GmbH
Industriestr. 25
91710 Gunzenhausen / Germany
Tel: +49 9831 505-0
Fax: +49 9831 505-3
www.hetzner.com

Registergericht Ansbach, HRB 6089
Geschäftsführer: Martin Hetzner
  1. The hetzner policy does state that the contract can be terminated "at any time during a period of 30 days to the end of the month" I don't know what the fuck that means, but it does say 30 days https://www.hetzner.com/rechtliches/agb
9.2 The contract is cancellable without giving reasons by both parties at any time during a period of 30 days to the end of the month, but at the earliest on expiry of the minimum contract period stipulated in the contract. A cancellation can be done in writing by letter, fax, email or via the secure online administrations interface, provided this option is available. 
  1. I went ahead and initiated the contract cancellation on konsoleh per their instructions. The earliest cancellation date was 2018-10-15, which I selected. This was the response:
You cancellation has been recorded in our system under the number K18100432 and is now in place. You will soon receive a confirmation and a summary of your cancellation information via e-mail.

Should you decide to change the cancellation data in any way, or you decide to reverse the cancellation, simply let us know at any time by writing us a support request via konsoleH.
  1. I sent an email to Marcin about this, informing him that Hetzner is making us pay for an additional 53 days after the time we initiated the cancellation request via email.
  2. I created the microfactory site
  3. I updated the cert to include microfactory
[root@hetzner2 htdocs]# certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/www.opensourceecology.org/htdocs/ -d opensourceecology.org  -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org  -w /var/www/html/staging.opensourceecology.org/htdocs -d staging.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/d3d.opensourceecology.org/htdocs/ -d d3d.opensourceecology.org -w /var/www/html/3dp.opensourceecology.org/htdocs/ -d 3dp.opensourceecology.org -w /var/www/html/microfactory.opensourceecology.org/htdocs/ -d microfactory.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d munin.opensourceecology.org -d phplist.opensourceecology.org 
...
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/opensourceecology.org/privkey.pem
   Your cert will expire on 2018-11-25. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

[root@hetzner2 htdocs]# 
  1. I added the oshine theme + all the dependent plugins
  2. I added users for marcin & cmota as admins
  3. I sent an email to Marcin & Catarina that the microfactory site is up
  1. ...
  1. I'm still fighting with getting nginx to redirect http traffic to https when running a server on a non-standard port (4443) for phplist
  2. I found that one solution is to catch error code 497 & redirect the "error page" to just use the "https" schema. 497 is specifically defined as "HTTP Request Sent to HTTPS Port" in nginx https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error
  3. I used "error_page 497 301 =301 https://$subdomain.$domain$request_uri;", and it worked https://stackoverflow.com/questions/8768946/dealing-with-nginx-400-the-plain-http-request-was-sent-to-https-port-error
  4. now if only I could figure out why phplist is going to the awstats vhost. next step: add debug lines https://easyengine.io/tutorials/nginx/debugging/
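For reference, a minimal sketch of how the 497 trick sits in the phplist vhost; the TLS, logging, and proxy directives are omitted, and the redirect target is written out literally here instead of via the $subdomain.$domain variables used in the line quoted above.

# sketch only -- not a verbatim copy of our phplist vhost config
server {
	listen 138.201.84.243:4443 ssl;
	server_name phplist.opensourceecology.org;

	# nginx returns 497 when a plain-http request hits an https listener;
	# catch it and 301 the client back to the same URI over https
	error_page 497 =301 https://phplist.opensourceecology.org$request_uri;
}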

Sat Aug 25, 2018

  1. hetzner responded telling us to do the cancellation of hetzner1 via konsoleh. Apparently we don't want to "delete our account" but we want to "cancel our contract". The difference in the konsoleh wui is the navigation via "Contract -> Cancel Account"
  2. ugh, I went to cancel the contract, but the drop-down menu of when it's to become effective only allows us to choose a date for 15th of every month, starting in October. It's currently only August. I'm not submitting a form stating that we affirm that we want our server shut down then! We want it shut down now. And we don't want to pay for what we don't need. The server has already been offline for one month, anyway.
  3. I sent Magdalena at hetzner a response asking again for them to cancel our contract, effective the day that I submitted the initial request on 2018-08-23.
Magdalena,

Please cancel our contract for this "MS 5000" effective 2018-08-23, when I initiated this request.

The method you described via konsoleh only gives us the earliest cancellation date of 2018-10-15. We do not wish to pay for our service through October 15th. Indeed, our server has been offline for over a month, and we notified you over a month ago that our cancellation was imminent. On 2018-08-23, we gave you notice that we wanted our account terminated immediately.

Again, please cancel our shared hosting contract (Client number: C0704628411). Note that we have a distinct contract for a dedicated host (EX41S-SSD #542193 138.201.84.223), and that we do not want to cancel that contract! The EX41S-SSD server is a live production server while the dedi978.your-server.de server is currently offline per this request last month. Please cancel only our shared hosting contract on server = dedi978.your-server.de.


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
  1. I spent some time documenting the phplist install issues with random_compat needing libsodium installed
    1. https://tech.michaelaltfield.net/2018/08/25/fix-phplist-500-error-due-to-random_compat/
    2. https://wiki.opensourceecology.org/wiki/Phplist#libsodium
    3. https://discuss.phplist.org/t/common-installation-errors-manual-chapter-feedback-and-discussion/217/2
    4. https://serverfault.com/questions/927998/why-is-phplist-responding-with-500-internal-server-error
    5. https://github.com/paragonie/random_compat/issues/99

Thr Aug 23, 2018

  1. Marcin got the email from phplist, but when he attempted to load it, he got an error back from the server that he was trying to speak http on an https port. The link he was sent was this: http://phplist.opensourceecology.org:4443/lists/admin/?page=login&token=someLongHexTokenHere
user@ose:~$ curl -i "http://phplist.opensourceecology.org:4443/lists/admin/?page=login&token=someLongHexTokenHere"
HTTP/1.1 400 Bad Request
Server: nginx
Date: Thu, 23 Aug 2018 18:36:43 GMT
Content-Type: text/html
Content-Length: 264
Connection: close

<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx</center>
</body>
</html>
user@ose:~$ 
  1. so we already redirect most http calls to our server to https, but we do that by listening on port 80. All requests sent to port 80 are 301 redirected to https with this block in /etc/nginx/nginx.conf
   # redirect all port 80 requests to use https
   server {

	  include conf.d/secure.include;

	  listen 138.201.84.243:80;
	  listen 138.201.84.223:80;
	  listen [::]:80;

	  # force all traffic to use https
	  server_name ~^(?<subdomain>[^\.]+)\.(?<domain>.+)$;
	  return 301 https://$subdomain.$domain$request_uri;
   }
  1. of course, the above block won't do anything if the URL itself specifies a non-port-80 port. How are we supposed to redirect then? It's hard to google, since most of the "how to redirect http to https" solutions follow the best practice employed above = listen on 80 & redirect to https. I found some other downvoted options that use an if check on $scheme (which is 'http' or 'https'). I tried this, but it _still_ wouldn't redirect! https://nginx.org/en/docs/http/ngx_http_core_module.html#variables
  2. so the access log for phplist vhost queries goes to /var/log/nginx/access_log, rather than the desired /var/log/nginx/phplist/access_log. In fact, the latter phplist-specific log dir is empty! That tells me that nginx is matching the wrong vhost, which is why it's not listening to my changes to the config file.
  3. digging deeper at all the vhosts listening on port 4443 on the ose ip (excluding obi, which is a distinct ip), I found 3x = munin, awstats, and phplist
[root@hetzner2 conf.d]# grep -ir '4443' *
awstats.openbuildinginstitute.org.conf:   listen 138.201.84.223:4443;
awstats.opensourceecology.org.conf:   listen 138.201.84.243:4443;
munin.opensourceecology.org.conf:   listen 138.201.84.243:4443;
phplist.opensourceecology.org:   listen 138.201.84.243:4443;
[root@hetzner2 conf.d]# 
  1. checking those files, I found that awstats.opensourceecology.org.conf didn't specify a vhost-specific log file. I changed that, restarted it, and--lo and behold--I found the log entries for phplist going to /var/log/nginx/awstats.opensourceecology.org/access_log
[root@hetzner2 nginx]# tail -f awstats.opensourceecology.org/*
==> awstats.opensourceecology.org/access.log <==
73.252.245.128 - - [23/Aug/2018:19:10:33 +0000] "GET /lists/admin/?page=login&token=hexTokenHere HTTP/1.1" 400 264 "-" "curl/7.52.1" "-"
  1. but why is nginx sending this request to the wrong vhost? I'm not even speaking https, so it should be able to clearly see the Host field as phplist.opensourceecology.org. To be sure, I confirmed this with tcpdump. Yep, in the hex output you can see the Host header literally containing 'phplist.opensourceecology.org'. Only one of our configs says that's the server_name. Why does it go to the awstats one?
[root@hetzner2 ~]# tcpdump port 4443 -X
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes


19:14:43.537569 IP c-73-252-245-128.hsd1.ca.comcast.net.35808 > opensourceecology.org.pharos: Flags [S], seq 547431718, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 6], length 0
		0x0000:  4500 0034 6e7a 4000 2706 c610 49fc f580  E..4nz@.'...I...
		0x0010:  8ac9 54f3 8be0 115b 20a1 2526 0000 0000  ..T....[..%&....
		0x0020:  8002 7210 fac5 0000 0204 05b4 0101 0402  ..r.............
		0x0030:  0103 0306                                ....
19:14:43.537580 IP opensourceecology.org.pharos > c-73-252-245-128.hsd1.ca.comcast.net.35808: Flags [S.], seq 3742495441, ack 547431719, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
		0x0000:  4500 0034 0000 4000 4006 1b8b 8ac9 54f3  E..4..@.@.....T.
		0x0010:  49fc f580 115b 8be0 df11 f2d1 20a1 2527  I....[........%'
		0x0020:  8012 7210 1f60 0000 0204 05b4 0101 0402  ..r..`..........
		0x0030:  0103 0307                                ....
19:14:43.723377 IP c-73-252-245-128.hsd1.ca.comcast.net.35808 > opensourceecology.org.pharos: Flags [.], ack 1, win 457, length 0
		0x0000:  4500 0028 6e7b 4000 2706 c61b 49fc f580  E..(n{@.'...I...
		0x0010:  8ac9 54f3 8be0 115b 20a1 2527 df11 f2d2  ..T....[..%'....
		0x0020:  5010 01c9 d9e9 0000 0000 7735 39af       P.........w59.
19:14:43.723394 IP c-73-252-245-128.hsd1.ca.comcast.net.35808 > opensourceecology.org.pharos: Flags [P.], seq 1:161, ack 1, win 457, length 160
		0x0000:  4500 00c8 6e7c 4000 2706 c57a 49fc f580  E...n|@.'..zI...
		0x0010:  8ac9 54f3 8be0 115b 20a1 2527 df11 f2d2  ..T....[..%'....
		0x0020:  5018 01c9 9a8e 0000 4745 5420 2f6c 6973  P.......GET./lis
		0x0030:  7473 2f61 646d 696e 2f3f 7061 6765 3d6c  ts/admin/?page=l
		0x0040:  6f67 696e 2674 6f6b 656e 3d38 6639 6330  ogin&token=8f9c0
		0x0050:  6366 6239 3634 3634 6164 6362 3362 6433  cfb96464adcb3bd3
		0x0060:  6437 6163 6634 3366 3633 3020 4854 5450  d7acf43f630.HTTP
		0x0070:  2f31 2e31 0d0a 486f 7374 3a20 7068 706c  /1.1..Host:.phpl
		0x0080:  6973 742e 6f70 656e 736f 7572 6365 6563  ist.opensourceec
		0x0090:  6f6c 6f67 792e 6f72 673a 3434 3433 0d0a  ology.org:4443..
		0x00a0:  5573 6572 2d41 6765 6e74 3a20 6375 726c  User-Agent:.curl
		0x00b0:  2f37 2e35 322e 310d 0a41 6363 6570 743a  /7.52.1..Accept:
		0x00c0:  202a 2f2a 0d0a 0d0a                      .*/*....
19:14:43.723398 IP opensourceecology.org.pharos > c-73-252-245-128.hsd1.ca.comcast.net.35808: Flags [.], ack 161, win 237, length 0
  1. I tried moving the 'server_name' definitions to be the first line of the server block in both the awstats & phplist nginx config files, but it made no difference.
  2. I'll have to tackle this next week. For now, I have to cancel our hetzner 1 contract!
  1. ...
  1. I went to delete the account, but I got an error = "We could not delete your account. You still have active products on this account. Please first cancel all products on this account."
  2. So I went back to Domains -> Account Management -> Delete account. I was presented with a drop-down containing 10 accounts. I tried to delete the first one (blog.opensourceecology.org), but then it complained that the account had dependencies on the DB!
  3. So I went to Domains -> Databases. I saw a list of 12 databases: oseblog, osecivi, osedrupal, oseforum, openswh, ose_fef, ose_website, osesurv, osewiki, microft_db2, microft_drupal1, microft_wiki
  4. I confirmed that we had all of these already backed-up https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation#2018-07-16
  5. I went to delete the first database, and I got an error = "The database could not be deleted. Please contact our support team."
  6. I give up; I responded to the email asking them to immediately cancel our contract.

Wed Aug 22, 2018

  1. I updated the Email List document to include phplist https://wiki.opensourceecology.org/wiki/Email_List#2018_Revisit
  2. My question about hardening the phplist install was approved, but I have no response yet https://discuss.phplist.org/t/installing-phplist-manually-manual-chapter-feedback-and-discussion/212/31
  3. I'm still not sure why phplist merely dumps a 500 error back at us without any logs
  4. I started digging through the phplist code. Oh my, how awful! It's just a wall of code with very little structure or comments. Ick!
  5. I finally decided to install xdebug so I could see what the fuck is actually going on. I intend to disable it in php.ini, but enable it in the vhost config file only https://stackoverflow.com/questions/15423705/how-can-i-use-xdebug-to-debug-only-one-virtual-host
    1. this will also allow us to use profiling, which I've always wanted
[root@hetzner2 admin]# yum search xdebug
Loaded plugins: fastestmirror, replace
Loading mirror speeds from cached hostfile
 * base: mirror.wiuwiu.de
 * epel: mirror.wiuwiu.de
 * extras: mirror.wiuwiu.de
 * updates: mirror.wiuwiu.de
 * webtatic: uk.repo.webtatic.com

N/S matched: xdebug
php-composer-xdebug-handler.noarch : Restarts a process without xdebug
php-pecl-xdebug.x86_64 : PECL package for debugging PHP scripts
php55w-pecl-xdebug.x86_64 : PECL package for debugging PHP scripts
php56w-pecl-xdebug.x86_64 : PECL package for debugging PHP scripts
php70w-pecl-xdebug.x86_64 : PECL package for debugging PHP scripts
php71w-pecl-xdebug.x86_64 : PECL package for debugging PHP scripts
php72w-pecl-xdebug.x86_64 : PECL package for debugging PHP scripts

  Name and summary matches only, use "search all" for everything.
[root@hetzner2 admin]# php -v
PHP 5.6.33 (cli) (built: Jan 14 2018 08:07:11) 
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
	with Xdebug v2.5.5, Copyright (c) 2002-2017, by Derick Rethans
[root@hetzner2 admin]# yum install php56w-pecl-xdebug
...
Installed:
  php56w-pecl-xdebug.x86_64 0:2.5.5-2.w7                                                                                                  

Complete!
[root@hetzner2 admin]# 
  1. there were no options for xdebug added to /etc/php.ini by the install, but there was a new file at /etc/php.d/xdebug.ini

[root@hetzner2 admin]# grep -i 'xdebug' /etc/php.ini
[root@hetzner2 admin]# cat /etc/php.d/xdebug.ini
; Enable xdebug extension module
zend_extension=/usr/lib64/php/modules/xdebug.so
[root@hetzner2 admin]# 

  1. I added these lines to the new xdebug.ini file to disable it by default
[root@hetzner2 admin]# cat /etc/php.d/xdebug.ini 
; Enable xdebug extension module
zend_extension=/usr/lib64/php/modules/xdebug.so
xdebug.remote_enable off
[root@hetzner2 admin]# 
  1. I restarted apache, then temporarily allowed the phpinfo() function, and created a script in obi's site and in phplist's site that output phpinfo(). I found that the xdebug module was listed in both sites, but that the 'xdebug.remote_enable' arg was 'On' for the "Local Value" of phplist but 'Off' for the "Local Value" of obi. In both sites, the "Master Value" of 'xdebug.remote_enable' was "Off". I guess this is as non-invasive as we can get; according to the xdebug docs this should disable xdebug debugging *shrug* https://xdebug.org/docs/remote
    1. I re-disabled the phpinfo() function & restarted apache
  2. I can't get xdebug to actually dump info! why is this so complicated? I just want it to dump a ton of info about the stack trace to some log file, not connect to an IDE (who uses IDEs anyway? I'm not some .NET developer..)
    1. like, what files were executed before the 500? Which functions? Which lines?
  3. digging further into the phplist.org docs, their first entry in the "Common Installation Errors" talks about the dreaded 500 errors https://www.phplist.org/manual/ch033_common-installation-errors.xhtml
    1. it blames .htaccess files. I know this isn't the case, because if I just put a "<?php die('fail');" at the top of the 'index.php' file, then I simply get a 200 message with "fail" back. So it's deeper than that. Which is why I wanted to know what files/functions/lines were being touched here..
    2. in any case, I renamed lists/admin/.htaccess to /lists/admin/.bak_htaccess and lists/.htaccess to lists/.bak_htaccess. No changes.
  4. I put in a ton of options to try to get xdebug to print _something_

  1. but, alas, all it will tell me is
[Wed Aug 22 22:31:33.375599 2018] [:error] [pid 16765] [client 127.0.0.1:56112] XDebug could not open the remote debug file '0'.
  1. I googled that, but got no results. Someone had a similar error of "XDebug could not open the remote debug file ", but the bug was marked as "can't reproduce" *sigh*
<VirtualHost 127.0.0.1:8000>                                                                                                              
   ServerName phplist.opensourceecology.org   
...
   # enable xdebug                                                                                                                        
   php_flag xdebug.remote_enable on                                                                                                       
   php_flag xdebug.force_display_errors 1                                                                                                 
   php_flag xdebug.scream 1                                                                                                               
   php_flag xdebug.profiler_enable 1                                                                                                      
   php_flag xdebug.profiler_output_dir /tmp                                                                                               
   #php_flag xdebug.remote_log "/var/log/httpd/phplist.opensourceecology.org/xdebug.log"                                                  
   php_flag xdebug.remote_log /tmp/xdebug.log                                                                                             
   php_flag xdebug.remote_autostart 1                                                                                                     
																																		  
</VirtualHost>             
  1. I've tried playing with the string passed to remote_log. With single quotes. Double Quotes. No Quotes. Nothing works.
  2. fuck this. I'll resort to the old way of dumping echo statements at various places and refreshing the page over-and-over.
  3. ok, that worked. I dropped this line in at various places, moving it lower & lower until the output 'test' didn't show up
echo 'test'; error_log( 'test1' );
  1. and this is the line that appears to be the issue. When I drop the above-listed line above this line, I see 'test' and a 200. When it's after, I get no output and a 500 error back from the server.
require_once dirname(__FILE__).'/defaultconfig.php';
  1. ok, then we dig into that file doing the same. fucking xdebug, why wouldn't you just do this for me?
  2. ok, so the issue is with the declaration of the huge var $default_config. It's an array spanning 622 lines! If I put my debug line before that, it's fine. After, I get a 500 error.
  3. checking modsec_audit.log, I see
--c3d2f17b-A--
[22/Aug/2018:23:33:50 +0000] W33y3v-FNVpZ-a69v@KBQAAAAAA 127.0.0.1 44136 127.0.0.1 8000
--c3d2f17b-B--
GET /lists/admin/index.php HTTP/1.0
X-Real-IP: 205.154.244.238
X-Forwarded-Proto: https
X-Forwarded-Port: 4443
Host: phplist.opensourceecology.org
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.75 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cookie: OSESESSION=c8o2s8vu3bkufgagkb8skdnv9kupbds3r52eia6d4oke5sku8peode7qrq15thsd5mboinicn5lr01jqrg3u3d9k34ac4eq5essith3
X-Forwarded-For: 205.154.244.238, 127.0.0.1
X-Varnish: 25778614

--c3d2f17b-F--
HTTP/1.0 500 Internal Server Error
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Expires: Thu, 19 Nov 1981 08:52:00 GMT
X-XSS-Protection: 1; mode=block
Content-Length: 5
Connection: close
Content-Type: text/html; charset=UTF-8

--c3d2f17b-E--

--c3d2f17b-H--
Apache-Handler: php5-script
Stopwatch: 1534980830384654 50920 (- - -)
Stopwatch2: 1534980830384654 50920; combined=835, p1=82, p2=687, p3=1, p4=38, p5=27, sr=20, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED"

--c3d2f17b-Z--
    1. I'm not sure that's helpful.
  1. I started playing craps with that 622-line var = $default_vars. I found some issues with this line
	# remote processing secret
	# @TODO previous value generation was limited to 20 hex characters (max), determine if this is enough (80 bits)
	'remote_processing_secret' => array(
		'value'       => bin2hex(random_bytes(10)),
		'description' => s('Secret for remote processing'),
		'type'        => 'text',
		'category'    => 'security',
	),
  1. I added the following lines at the top of the script, and I saw the output of 'test0' only. Therefore, the issue is the use of the 'random_bytes()' function.
<?php                                                                                                                                     
echo 'test0';                                                                                                                             
$test=random_bytes(10);                                                                                                                   
echo 'test1';                                                                                                                             
$test=bin2hex($test);                                                                                                                     
echo 'test2';
  1. it looks like this random_bytes() function is not built into php 5.6; it's provided by a polyfill library (random_compat) included with the phplist project. The function is defined in 6x distinct files
[root@hetzner2 public_html]# grep -irl 'function random_bytes' *
lists/admin/inc/random_compat/random_bytes_dev_urandom.php
lists/admin/inc/random_compat/random_bytes_libsodium.php
lists/admin/inc/random_compat/random_bytes_mcrypt.php
lists/admin/inc/random_compat/random.php
lists/admin/inc/random_compat/random_bytes_com_dotnet.php
lists/admin/inc/random_compat/random_bytes_libsodium_legacy.php
[root@hetzner2 public_html]# 
  1. I'm not 100% sure which file is the culprit here, but the only result for 'random' when searching in the main admin index.php file is the shortest one = random.php
[root@hetzner2 public_html]# grep 'random' lists/admin/index.php 
require_once dirname(__FILE__).'/inc/random_compat/random.php';
[root@hetzner2 public_html]# 
  1. and there's the exception (in file = lists/admin/inc/random_compat/random.php)! It's literally throwing a fucking exception. Why can't xdebug just tell me this and dump a stack trace? Or php throw an error on this file/line? That would have saved so much time! Ugh. Anyway, when I changed the file to be this, I see 'test0testa'.
		/**                                                                                                                               
		 * throw new Exception                                                                                                            
		 */                                                                                                                               
		if (!is_callable('random_bytes')) {                                                                                               
			/**                                                                                                                           
			 * We don't have any more options, so let's throw an exception right now                                                      
			 * and hope the developer won't let it fail silently.                                                                         
			 */                                                                                                                           
			function random_bytes($length)                                                                                                
			{                                                                                                                             
echo 'testa';                                                                                                                             
				throw new Exception(                                                                                                      
					'There is no suitable CSPRNG installed on your system'                                                                
				);                                                                                                                        
echo 'testb';                                                                                                                             
			}                                                                                                                             
		}                                                                                                                                 
	}     
  1. googling this exception's contents led me here. The OP uses similar language to mine to describe how hard this was to find, but they're running php 5.3 on RHEL6. We're on CentOS 7 with php 5.6, so it's not relevant https://discuss.phplist.org/t/phplist-3-3-1-required-php-version/2567
  2. I also landed on this page. It mentioned that there's a var "$er = error_reporting(0);" in admin/index.php. I changed that to 1, but to no avail. https://discuss.phplist.org/t/solved-3-3-1-not-reachable-error-500/2565/9
    1. I also changed init.php's line 9 to be "error_reporting(1);", and the exception popped up in the error log, finally!
==> phplist.opensourceecology.org/error_log <==
[Thu Aug 23 00:06:29.560157 2018] [:error] [pid 17617] [client 127.0.0.1:51262] PHP Fatal error:  Uncaught exception 'Exception' with message 'There is no suitable CSPRNG installed on your system' in /var/www/html/phplist.opensourceecology.org/public_html/lists/admin/inc/random_compat/random.php:204\nStack trace:\n#0 /var/www/html/phplist.opensourceecology.org/public_html/lists/admin/defaultconfig.php(3): random_bytes(10)\n#1 /var/www/html/phplist.opensourceecology.org/public_html/lists/admin/index.php(103): require_once('/var/www/html/p...')\n#2 {main}\n  thrown in /var/www/html/phplist.opensourceecology.org/public_html/lists/admin/inc/random_compat/random.php on line 204

==> phplist.opensourceecology.org/access_log <==
127.0.0.1 - - [23/Aug/2018:00:06:29 +0000] "GET /lists/admin/index.php HTTP/1.0" 500 10 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.75 Safari/537.36"
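(for future admins: the effective error_reporting/display_errors values can also be checked from the cli rather than editing phplist's files, e.g.)
# show what error_reporting and display_errors resolve to, and which php.ini is loaded
php -r 'echo ini_get("error_reporting"), " ", ini_get("display_errors"), "\n";'
php --ini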
  1. I went ahead and added another note to the discussion page for the "Troubleshooting Techniques" documentation page of phplist (which has been empty since it was created 3 years ago), asking them to include a note about these error_reporting() calls, which otherwise suppress healthy errors. I guess they assume that sysadmins are too dumb to disable error reporting being sent to clients. https://discuss.phplist.org/t/troubleshooting-techniques-manual-chapter-feedback-and-discussion/234/2
  2. this is related. It suggests adding /dev/urandom to open_basedir. That seems like it would come with some risk, so I'll have to look at the arguments against that more thoroughly https://github.com/paragonie/random_compat/issues/99
    1. that github repo appears to be the maintainer of the lib that's included in phplist
    2. it's also used in another project = kanboard, and they have a thread about this with the same conclusion https://github.com/kanboard/kanboard/issues/2060
    3. I found a note from the author that stated we could also overwrite the function that throws this exception with a file that uses openssl to generate the random bytes. They intentionally decided to distrust openssl. For the purposes of phplist, I generally think it would be better to have phplist use a broken openssl (not sure if it is tho) than compromise all our other sites by giving php access to /dev/urandom. Indeed, phplist is only going to send out public newsletters anyway. https://github.com/paragonie/random_compat/releases/tag/v1.3.0
      1. I found this from here https://ourcodeworld.com/articles/read/214/how-to-solve-php-7-0-polyfill-there-is-no-suitable-csprng-installed-on-your-system-paragonie-random-compat
    4. according to this thread on the phplist list (heh), phplist will first try to use libsodium, then fread() which uses /dev/urandom, then a few other items. So I'll try to install libsodium https://sourceforge.net/p/phplist/mailman/phplist-developers/thread/72523e0e-4442-1642-e24d-0ceffd5de82a%40safeandsoundit.co.uk/
[root@hetzner2 public_html]# yum install php56w-pecl-libsodium
...
Installed:
  php56w-pecl-libsodium.x86_64 0:1.0.6-1.w7                                                                                               

Dependency Installed:
  libsodium.x86_64 0:1.0.16-1.el7                                     libsodium13.x86_64 0:1.0.5-1.el7                                    

Complete!
[root@hetzner2 public_html]# 
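(sanity check: confirm the new extension is at least visible to the php cli; this says nothing yet about whether mod_php inside apache has loaded it)
php -m | grep -i sodium
php -r 'var_dump(extension_loaded("libsodium"));'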
  1. unfortunately, that didn't work. After the below change, I see "checking if libsodium is accessibletest0testa" on refresh
	if (!is_callable('random_bytes')) {                                                                                                   
		/**                                                                                                                               
		 * PHP 5.2.0 - 5.6.x way to implement random_bytes()                                                                              
		 *                                                                                                                                
		 * We use conditional statements here to define the function in accordance                                                        
		 * to the operating environment. It's a micro-optimization.                                                                       
		 *                                                                                                                                
		 * In order of preference:                                                                                                        
		 *   1. Use libsodium if available.                                                                                               
		 *   2. fread() /dev/urandom if available (never on Windows)                                                                      
		 *   3. mcrypt_create_iv($bytes, MCRYPT_DEV_URANDOM)                                                                              
		 *   4. COM('CAPICOM.Utilities.1')->GetRandom()                                                                                   
		 *   5. openssl_random_pseudo_bytes() (absolute last resort)                                                                      
		 *                                                                                                                                
		 * See RATIONALE.md for our reasoning behind this particular order                                                                
		 */                                                                                                                               
echo 'checking if libsodium is accessible';                                                                                               
		if (extension_loaded('libsodium')) {                                                                                              
echo 'libsodium is accessible';                                                                                                           
			// See random_bytes_libsodium.php                                                                                             
			if (PHP_VERSION_ID >= 50300 && is_callable('\\Sodium\\randombytes_buf')) {                                                    
				require_once $RandomCompatDIR.'/random_bytes_libsodium.php';                                                              
			} elseif (method_exists('Sodium', 'randombytes_buf')) {                                                                       
				require_once $RandomCompatDIR.'/random_bytes_libsodium_legacy.php';                                                       
			}                                                                                                                             
		}   
  1. oh! oh! oh! I restarted apache, and suddenly phplist is returning an actual web ui! AWESOME!!!
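(the restart is what mattered here: mod_php only loads newly installed extensions when httpd is restarted, so after any yum/pecl install of a php extension something like the following is needed before the web ui will see it)
# restart apache so mod_php re-reads its config and loads the new extension
systemctl restart httpd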
  2. I clicked the button that said "Initialise Database"
  3. As requested, I created my account. It asked for a password for 'admin', so I generated one in the
    1. ugh, the password text field on phplist was a fucking text input, not a password one. That's like a super n00b mistake. This is a very, very bad red flag :(
  4. ok, phplist is installed! Yay!
  5. some of the links are broken, as it keeps sending me to "https://phplist.opensourceecology.org/lists/admin/", but it _should_ be "https://phplist.opensourceecology.org:4443/lists/admin/..."
  6. I tried changing the general settings to have a website address = "phplist.opensourceecology.org:4443", but the issue did not go away
  7. I did some more digging through config/config_extended.php, and I found a few options that I'm going to want to change
    1. UPLOADIMAGES_DIR
    2. $attachment_repository
    3. $tmpdir
  8. setting the following line in config.php fixed the port issue
define('HTTP_HOST','phplist.opensourceecology.org:4443');
  1. I changed the default TEST define of 0 to 1 so I could test phplist actually sending an email
  2. I attempted to send a new campaign, but it failed with a 403 issue from modsecurity id = 950901, sqli. I whitelisted it
==> phplist.opensourceecology.org/error_log <==
[Thu Aug 23 01:20:16.429107 2018] [:error] [pid 3124] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:([\\\\s'\\"`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98\\\\(\\\\)]*?)\\\\b([\\\\d\\\\w]++)([\\\\s'\\"`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98\\\\(\\\\)]*?)(?:(?:=|<=>|r?like|sounds\\\\s+like|regexp)([\\\\s'\\"`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98\\\\(\\\\)]*?)\\\\2\\\\b|(?:!=|<=|>=|<>|<|>|\\\\^|is\\\\s+not ..." at ARGS:message. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "77"] [id "950901"] [rev "2"] [msg "SQL Injection Attack: SQL Tautology Detected."] [data "Matched Data: p>yo found within ARGS:message: <p>yo yo. first email!</p>\\x0d\\x0a"] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"] [hostname "phplist.opensourceecology.org"] [uri "/lists/admin/"] [unique_id "W34L0DwA4WOBpZueMx0GdQAAAAM"]

==> phplist.opensourceecology.org/access_log <==
127.0.0.1 - - [23/Aug/2018:01:20:16 +0000] "POST /lists/admin/?page=send&id=7&tk=48e5473a9b5e8d2267f5aa27338dbcd1 HTTP/1.0" 403 214 "https://phplist.opensourceecology.org:4443/lists/admin/?page=send&id=7&tk=48e5473a9b5e8d2267f5aa27338dbcd1" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.75 Safari/537.36"
    1. also for 950901, sqli
    2. and 973300, xss
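(for future admins: to see which modsec rule ids are firing for this vhost while reproducing a block, something like the following works; the log path is an assumption based on the tail commands elsewhere in this log)
# count the modsec rule ids present in the vhost's error_log
grep -o 'id "[0-9]*"' /var/log/httpd/phplist.opensourceecology.org/error_log | sort | uniq -c | sort -rn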

Tue Aug 21, 2018

  1. the inventory job from yesterday was inaccessible again; trying again..
[root@hetzner2 ~]# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id 'tg7RVs9SduwBh_0xsIdC0WZ4xpHQcJfyDCdgytXNs3qBs5s68f6KniCUmYygTyV1YM7OJ-3sKoZg54ZUMhDLQ-FoWso' output.json

An error occurred (ResourceNotFoundException) when calling the GetJobOutput operation: The job ID was not found: tg7RVs9SduwBh_0xsIdC0WZ4xpHQcJfyDCdgytXNs3qBs5s68f6KniCUmYygTyV1YM7OJ-3sKoZg54ZUMhDLQ-FoWso
[root@hetzner2 ~]# 
[root@hetzner2 ~]# aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'
{
	"location": "/099400651767/vaults/deleteMeIn2020/jobs/RHHVDhGRgJpG2t-ne49vKPZcZzGRSobq5XVw1pw4x7HuT1iWT9FFHC3xSl6bga_my22IEO8C5EbbxtbWBwUIslsHpYiC", 
	"jobId": "RHHVDhGRgJpG2t-ne49vKPZcZzGRSobq5XVw1pw4x7HuT1iWT9FFHC3xSl6bga_my22IEO8C5EbbxtbWBwUIslsHpYiC"
}
[root@hetzner2 ~]# 
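(note: glacier job output is only retrievable for a limited window -- roughly a day -- after the job completes, which is presumably why yesterday's job id returned ResourceNotFoundException. The new job can be polled with describe-job before attempting get-job-output, e.g.)
# poll until the job reports "Completed": true, then fetch the output
aws glacier describe-job --account-id - --vault-name deleteMeIn2020 --job-id 'RHHVDhGRgJpG2t-ne49vKPZcZzGRSobq5XVw1pw4x7HuT1iWT9FFHC3xSl6bga_my22IEO8C5EbbxtbWBwUIslsHpYiC'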
  1. I updated the nginx & varnish configs so that the phplist vhost operates like munin, running on port 4443 with port 443 going to the generic certbot apache vhost for cert validation only
  2. I updated our cert to include phplist.opensourceecology.org
[root@hetzner2 sites-enabled]# certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/www.opensourceecology.org/htdocs/ -d opensourceecology.org  -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org  -w /var/www/html/staging.opensourceecology.org/htdocs -d staging.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/d3d.opensourceecology.org/htdocs/ -d d3d.opensourceecology.org -w /var/www/html/3dp.opensourceecology.org/htdocs/ -d 3dp.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d munin.opensourceecology.org -d phplist.opensourceecology.org 
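(verification sketch: confirm the reissued cert actually contains the new SAN, e.g.)
certbot certificates --cert-name opensourceecology.org
echo | openssl s_client -connect phplist.opensourceecology.org:4443 -servername phplist.opensourceecology.org 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'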
  1. confirmed that this site now works, but it just says "You will probably want to replace this document with your own website." before redirecting me to https://www.phplist.com
    1. http://phplist.opensourceecology.org:4443/
  2. the phplist install doc tells you to drop the mysql password right into the docroot! https://www.phplist.org/manual/ch028_installation.xhtml
    1. there's a discussion page about this document here https://discuss.phplist.org/t/installing-phplist-manually-manual-chapter-feedback-and-discussion/212
    2. I registered for an account & asked the community if there was a hardening guide, specifically calling out the bad practice of storing the db password within the docroot.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Hello,

Can someone tell me where I can find a guide to hardening a manual phplist install?

Specifically, I'm very concerned about the fact that the documentation page corresponding to this discussion thread (Installing phpList manually) tells us to use a config file (which contains the db password) that is located inside the document root. Unlike many similar web-based projects, phplist actually ships the tarball with a root directory and a 'public_html' directory inside of that. Why then is the config file not located in the root directory so it's located outside the document root?

Indeed, moving the config file outside the docroot is a very common hardening step. For example, see these links for doing so with mediawiki & wordpress:

 * https://www.mediawiki.org/wiki/Manual:Security#Move_sensitive_information
 * https://wordpress.stackexchange.com/questions/58391/is-moving-wp-config-outside-the-web-root-really-beneficial

As pointed out in the above links, config files containing passwords are moved out of the document root because:

  1. If there is some issue with the php engine of the web server, the content may be sent to the user in plaintext, clearly sending out the contents of the config file in plaintext. This becomes a non-issue if the config file is already located outside the docroot.

  2. Many editors save backups of the config file, such as 'config.php~' or '.config.php' It is very common for these files to linger if--for example--an ssh session terminated while editing the file. The result: the webserver may serve this backup of the config file in plaintext to the client. This becomes a non-issue if the config file is located outside the docroot, as the corresponding backup files will also exist outside the docroot.

Is there some guide published by phplist.org on how to harden a phplist install? If not, can one be created? Or, at least, can we update this page with instructions to move the config.php file outside the document root?


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAEBCgAGBQJbfHeaAAoJEKo+33Fg4tl7FMMP/0H+bKkcD6wH9dhp2tK5RWl2
djUKMITjc0E5goEX5vXu2NueGPpgHvqajpPBLu6MEwltSUsHDlmYVHD4JxFnZSz5
WhEkfV3GKBc7OWFfWDE14eoaXXh0WrhIJL3KjSKmE6/zqusvbY7lbdQkTBsw26OJ
gmjyPC67r96Q/Josnc2JO6fVGpnDGGOF0V5NzHhaDsfQoZug/1Q1AYwvcU125DmL
fALBS5jUbGzrklj3yyaQ/9S5p1RlVgmNw/fgY59Id05dvFdo/EFJQAEUiVz+KnBe
bHQnbYC5KUEeXZrBIAwEVhCYVfPYBy7Rvaln9IMy20ElBOe8Lpy58x/DeFqTcC8C
NH8dlvSmLXWkDRRZImVlo+PvQynfONuyFy07F9hf+WpOLP/qE8t3Vb8VE2lZjuty
gqJlSaeUmVBlH71K0Im6QVpjVOeRr4aZ9jv05Y0QYV/yGREELpoJL1NDFFOY0tBg
U0ogdavKHPSpbyk4sgBHheO0F2ruEPoadm/gCqzGsppepXXq4UXfe7p09Z5/Q8Cg
63SqVb6Bwx3kerbcvBKphTKF6ERNIGDu/QFy4IqfZh0PwrPZQCE54shtg134qr7I
2wA9KoMT66WYxTq33kQS9E2wNw4DAUzZHtCKUbSjgFvPXt6S5habjej48zZssY13
gjV1+34WTnq9yfRPcgrZ
=8pRr
-----END PGP SIGNATURE-----
  1. because I'm a new user, the above post is pending approval.
  2. I also updated our Mediawiki & Wordpress pages with references to the above-listed links to clearly document why we move config files outside the docroot
  3. so the phplist documentation says we should hit '/lists/admin/' after the install https://www.phplist.org/manual/ch028_installation.xhtml
    1. unfortunately, I'm getting an error on this page at https://phplist.opensourceecology.org:4443/lists/admin/
Forbidden

You don't have permission to access /lists/admin/index.php on this server.

Additionally, a 500 Internal Server Error error was encountered while trying to use an ErrorDocument to handle the request.
    1. tailing the error log shows a modsecurity issue:
[root@hetzner2 httpd]# tail -f phplist.opensourceecology.org/access_log phplist.opensourceecology.org/error_log 
...
==> phplist.opensourceecology.org/error_log <==
[Tue Aug 21 21:20:59.020200 2018] [:error] [pid 5200] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 4). Pattern match "^5\\\\d{2}$" at RESPONSE_STATUS. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_50_outbound.conf"] [line "53"] [id "970901"] [rev "2"] [msg "The application is not available"] [data "Matched Data: 500 found within RESPONSE_STATUS: 500"] [severity "ERROR"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "9"] [tag "WASCTC/WASC-13"] [tag "OWASP_TOP_10/A6"] [tag "PCI/6.5.6"] [hostname "phplist.opensourceecology.org"] [uri "/lists/admin/index.php"] [unique_id "W3yCOqCwGyH5nqixmXZ9iAAAAAY"]
[Tue Aug 21 21:20:59.020301 2018] [:error] [pid 5200] [client 127.0.0.1] ModSecurity: Warning. Operator GE matched 4 at TX:outbound_anomaly_score. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_60_correlation.conf"] [line "40"] [id "981205"] [msg "Outbound Anomaly Score Exceeded (score 4): The application is not available"] [hostname "phplist.opensourceecology.org"] [uri "/lists/admin/index.php"] [unique_id "W3yCOqCwGyH5nqixmXZ9iAAAAAY"]

==> phplist.opensourceecology.org/access_log <==
127.0.0.1 - - [21/Aug/2018:21:20:58 +0000] "GET /lists/admin/ HTTP/1.1" 403 354 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0"
^C
[root@hetzner2 httpd]# 
  1. I updated the apache config file
    1. removed the include of the modsec exclusions for wordpress & added a vhost-specific block for this
    2. changed the block of 'wp-config.php' to block 'config.php'
  2. I dug through 'lists/config/config.php' & 'lists/config/config_extended.php'
    1. I was hoping to find some info about uploads dirs (so I could harden the permissions + apache vhost config accordingly), but I found nothing
    2. I found some sec settings in there, but nothing jumped out at me as needing changing
  3. trying to decide if I should htaccess-block our phplist site, I dug into the history of vulnerabilities for the phplist project https://www.cvedetails.com/vulnerability-list/vendor_id-2603/opec-1/Phplist.html
    1. indeed, I found that there were at least 2x sql injection bugs
    2. one of them (CVE-2012-3953) was published on 2012-07-11, fixed by phplist on 08-02, and disclosed on 08-08. This exploit did not require anyone to be logged in. https://www.htbridge.com/advisory/HTB23100
  4. I updated the phplist vhost to whitelist the modsec rule id = 970901. It's a generic rule that blocks any response with a 5xx status; after I whitelisted it, the page content says = "Cannot connect to database, access denied. Please check your configuration or contact the administrator."
  5. I went ahead and moved the config file to be outside the docroot and replaced the config file within the docroot with a dumb require of the new file's location
[root@hetzner2 phplist.opensourceecology.org]# date
Wed Aug 22 02:05:15 UTC 2018
[root@hetzner2 phplist.opensourceecology.org]# pwd
/var/www/html/phplist.opensourceecology.org
[root@hetzner2 phplist.opensourceecology.org]# mv public_html/lists/config/config.php .
[root@hetzner2 phplist.opensourceecology.org]# cd public_html/lists/config/
[root@hetzner2 config]# cat << EOF > config.php
<?php
# including separate file that contains the database password so that it is not stored within the document root. I took this from what I did with our mediawiki install.
# For more info see:
#  * https://www.mediawiki.org/wiki/Manual:Security
#  * https://wiki.r00tedvw.com/index.php/Mediawiki/Hardening

\$docRoot = dirname( __FILE__ );
require_once "\$docRoot/../../../config.php";
?>
EOF
[root@hetzner2 config]# 
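(sanity check sketch: make sure the new shim parses and that its relative require resolves to the relocated config)
php -l public_html/lists/config/config.php
php -r 'require "/var/www/html/phplist.opensourceecology.org/public_html/lists/config/config.php"; echo "config loaded\n";'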
  1. I also updated the db name, user, and password in the real config.php file outside the docroot
  2. Now I just get an empty page with no error messages :( https://phplist.opensourceecology.org:4443/lists/admin/index.php
  3. I confirmed that the db config settings work
[root@hetzner2 phplist.opensourceecology.org]# mysql -hlocalhost -uphplist_user phplist_db -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3601997
Server version: 5.5.56-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [phplist_db]> show tables;
Empty set (0.00 sec)

MariaDB [phplist_db]> exit
Bye
[root@hetzner2 phplist.opensourceecology.org]# 
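(so the credentials work and the db is simply empty; presumably the schema won't exist until the installer's "Initialise Database" step runs. One more hedged check that the user's grants are sane:)
mysql -hlocalhost -uphplist_user -p phplist_db -e 'SHOW GRANTS FOR CURRENT_USER();'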
  1. ...
  1. finally, I snagged the inventory of glacier!
[root@hetzner2 ~]# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id 'RHHVDhGRgJpG2t-ne49vKPZcZzGRSobq5XVw1pw4x7HuT1iWT9FFHC3xSl6bga_my22IEO8C5EbbxtbWBwUIslsHpYiC' output.json
{
	"status": 200, 
	"acceptRanges": "bytes", 
	"contentType": "application/json"
}
[root@hetzner2 ~]# 
  1. I downloaded it to my local machine for easier inspection
user@ose:/tmp$ scp opensourceecology.org:glacierInventory.20180821.txt .
glacierInventory.20180821.txt                 100%   24KB  22.9KB/s   00:01    
user@ose:/tmp$ cat glacierInventory.20180821.txt 
{"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020","InventoryDate":"2018-08-01T07:41:31Z","ArchiveList":[{"ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:35:48Z","Size":380236,"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"},{"ArchiveId":"lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:50:36Z","Size":280709,"SHA256TreeHash":"3f79016e6157ff3e1c9c853337b7a3e7359a9183ae9b26f1d03c1d1c594e45ab"},{"ArchiveId":"fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:53:00Z","Size":280718,"SHA256TreeHash":"6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4"},{"ArchiveId":"zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:55:04Z","Size":1187682789,"SHA256TreeHash":"c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f"},{"ArchiveId":"Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:57:50Z","Size":877058000,"SHA256TreeHash":"fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b"},{"ArchiveId":"P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA","ArchiveDescription":"hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name","CreationDate":"2018-03-31T22:46:18Z","Size":2299038,"SHA256TreeHash":"2e789c8c99f08d338f8c1c2440afd76c23f76124c3dbdd33cbfa9f46f5c6b2aa"},{"ArchiveId":"o-naX0m4kQde-2i-8JZbEESi7r8OlFjIoDjgbQSXT_zt9L_e7qOH3HQ1R7ViQC3i7M0lVLbODsGZm9w9HfI3tHYKb2R1T_WWBwMxFuC_OhYiPX8uepTvvBg2Mg6KysP9H3zNzwGSZw","ArchiveDescription":"hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix 
indicates","CreationDate":"2018-03-31T23:47:51Z","Size":12009829896,"SHA256TreeHash":"022f088abcfadefe7df5ac770f45f315ddee708f2470133ebd027ce988e1a45d"},{"ArchiveId":"mxeiPukWr03RpfDr49IRdJUaJNjIWQM4gdz8S8k3-_1VetpneyWZbwEVKCB1uMTYpPy0L6HZgZP7vJ6b7gz1oeszMnlzZR0-W6Rgt4O0BZ_mwgtGHRKOH0SIpMJHRnePaq9SBR9gew","ArchiveDescription":"hetzner1_20171001-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-01T20:20:52Z","Size":2309259,"SHA256TreeHash":"2d2711cf7f20b52a22d97b9ebe7b6a0bd45a3211842be66ca445b83fbc7984e5"},{"ArchiveId":"TOZBeL9sYVRtzy7gsAC1d930vcOhEBaABsh1ejb6vvad_NVSLu_1v0UvWqwkkf7x_8CCu6_WxolooSClZMhQOA21J_0_HP9GxvPkUvdSOeqmHjuANbIS82IRBOjFT4zFUoZnPhcVUg","ArchiveDescription":"hetzner1_20171001-052001.tar.gpg","CreationDate":"2018-04-01T21:42:48Z","Size":12235848201,"SHA256TreeHash":"a9868fdd4015fedbee5fb7b555a07b08d02299607eb64e73da689efb6bbad1ed"},{"ArchiveId":"LdlFgzhEnxVsuGMZU4d2c_rfMTGM_3iCvLUZZSpGmmLArCQLs8HxjWLwfDDeKPKEarvSgXOVA-Evy4Ep5WAzESoofG5jdCidL5OispSfHElpPu-60xbmNvQt9neLGZrwa3C_iESGiw","ArchiveDescription":"hetzner1_20171101-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-02T18:52:49Z","Size":2325511,"SHA256TreeHash":"920247e75ab48e16f18e7c0528778ea95ac0b74ffb18cdb3a68c0538d3e701e4"},{"ArchiveId":"6GHR8GlRG4EIlkA7O_Ta6BAXN3BQ7HmP0V7TgOp6bOa4cxuIlbHkmCd3I2lUSNwfG1penWOibFvvDhzgcihdmUMtCLepT3rl6HtFR5Lv-ro5mIegCcWQJOUDT0FRfsb7e7IkAze02Q","ArchiveDescription":"hetzner1_20171101-062001.tar.gpg","CreationDate":"2018-04-02T20:18:50Z","Size":12385858738,"SHA256TreeHash":"24c67d8686565c9f2b8b3eeacf2b4a0ec756a9f2092f44b28b56d2074d4ad364"},{"ArchiveId":"lryfyQFE4NbtWg5Q6uTq8Qqyc-y9il9WYe7lHs8H2lzFSBADOJQmCIgp6FxrkiaCcwnSMIReJPWyWcR4UOnurxwONhw8fojEHQTTeOpkf6fgfWBAPP9P6GOZZ0v8d8Jz_-QFVaV6Bw","ArchiveDescription":"hetzner1_20171201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-02T20:56:23Z","Size":2332970,"SHA256TreeHash":"366449cb278e2010864432c0f74a4d43bceff2e6a7b2827761b63d5e7e737a01"},{"ArchiveId":"O19wuK1PL_Wwf59-fjQuVP2Con0LXLf5Mk9xQA3HDPw4y1ZdwjYdFzmhZdaMUtGX666YKKjJu823l2C6seOTLg1ZbXZVTqQjZTeZGkQdCSRQdxyo3pEPWE2Iqpgb61FCiIETdCANUQ","ArchiveDescription":"hetzner2_20170702-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T12:29:09Z","Size":2039060,"SHA256TreeHash":"24df13a4150ab6ae7472727769677395c5e162660aea8aa0748833ad85c83e7a"},{"ArchiveId":"6ShVCMDoqdhc4wg84L1bXaq3O2InX-qB9Q9NMRH-xJQ0_TSlIN5b3fysow9-_RuNYc2lK958NrwFiIEa7Q0bVaT9LaZQH8WtoTqnX3DN2xJhb4_KUdu6iUaDdJUoPfsSXtC7xvPb-w","ArchiveDescription":"hetzner2_20170702-052001.tar.gpg","CreationDate":"2018-04-04T15:52:53Z","Size":21323056209,"SHA256TreeHash":"55030f294360adf1ed86e6e437a03432eb990c6d9c3e6b4a944004ad88d678e8"},{"ArchiveId":"0M5MSxjrlWJiT0XrncbVBITR__anuTLeOhcq9XvqsX0Q1koa0K0bH-wrZOQO7YsqqPv5Te3AUXPOCzIO6F0g5DQ2tOZq8E_YHX0XmMGjnOfeHIV9m_5GiCQAi3PrUuWM3C4cApTs7A","ArchiveDescription":"hetzner2_20170801-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T15:54:20Z","Size":198754,"SHA256TreeHash":"978f15fec9eef2171bdddf3da501b436d3bc3c7e518f2e70c0a1fd40ccf20d2a"},{"ArchiveId":"fwR6U5jX2T9N4mc14YQNMoA52vXICj-vvgIvYyDO5Qcv-pNeuXarT4gpzIy-XjuuF4KXkp9BXD13AA3hsau9PfW0ypy874m7arznCaMZO8ajm3NIicawZMiHGEikWw82EGY0z4VDIQ","ArchiveDescription":"hetzner2_20170801-072001.tar.gpg","CreationDate":"2018-04-04T16:08:27Z","Size":1746085455,"SHA256TreeHash":"6f3c5ede57e86991d646e577760197a01344bf013fb17a966fd7e2440f4b1062"},{"ArchiveId":"EZG83EoQ65jxe4ye0-0qszEqRjLE3lAb2Vi7vZ2eYvj1bVJnTc5kvfWgTxl4_w2G1PPk4pn6g2dIsYXosWk3OqWNaWNcYEOHEkNREHycnTpcl0rBkWJoimt9fCKLJCF7FiGavWUMSw","ArchiveDescription":"hetzner2_201709
01-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:09:29Z","Size":287980,"SHA256TreeHash":"b72f11bb747ddb47d88f0566122a1190caf569b2b999ed22b8a98c6194ac4a0e"},{"ArchiveId":"5xqn4AAJhxnOHLJMfkvGX3Qksj5BTiEyHURILglfH0TPh_GfvbZNHqzdYIW-8sMtJ8OQ1GnnFqAOpty5mMwOSEjaokWkrQhEZK9-q7FBKDXXglAlqQKEJpd2UcTQI47zBEmGRasm-A","ArchiveDescription":"hetzner2_20170901-072001.tar.gpg","CreationDate":"2018-04-04T16:27:43Z","Size":1800393587,"SHA256TreeHash":"87400a80fc668b03ad712eaf8f6096172b5fc0aaa81309cc390dd34cc3caecec"},{"ArchiveId":"3XL4MENpH6i2Dp6micFWmMR2-qim3D1LQGiyQHME_5_A5jAbepw7WDEJOS2m2gIudSXfCuyclHTqzZYEpr6RwTGIEmYGw1jQ-EDPWYzjGTGDJcwWZEiklTmhLgvezqfyeSnQsdQZtA","ArchiveDescription":"hetzner2_20171001-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:29:10Z","Size":662050,"SHA256TreeHash":"506877424ae5304ca0c635d98fb8d01ad9183ec46356882edf02d61e9c48da8d"},{"ArchiveId":"g8RFNrkxynpQ8Yt9y4KyJra09dhxd3fIJxDlaUeDYBe615j7XON8gAdHMAQVerPQ4VF10obcuHnp64-kJFMmkG722hrlp3QBKy262CD4CcSUTSk3m070Mz6q3xySkcPzqRyxDwjtYg","ArchiveDescription":"hetzner2_20171001-072001.tar.gpg","CreationDate":"2018-04-04T16:51:09Z","Size":2648387053,"SHA256TreeHash":"1bf72e58a8301796d4f0a357a3f08d771da53875df4696ca201e81d1e8f0d82b"},{"ArchiveId":"ktHLXVqR5UxOoXEO5uRNMrIq4Jf2XrA6VmLQ0qgirJUeCler9Zcej90Qyg9bHvhQJPreilT4jwuW08oy7rZD_jnjd_2rcdZ11Y5Zl3V25lSKdRPM-b21o21kaBEr_ihhlIxOmPqJXg","ArchiveDescription":"hetzner2_20171101-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:51:40Z","Size":280741,"SHA256TreeHash":"f227ecd619df1564f2bb835029864fad804461db5496f6825a76e4720e3098a7"},{"ArchiveId":"iUmKTuLdEX3By9oHoqPtJd4KpEQ_2xh5PKV4LPuwBDcXyZmtt4zfq96djdQar1HwYmIh64bXEGqP7kGc0hk0ZtWZc12TtFUL0zohEbKBYr2VFZCQHjmc461TMLskKsOiyd6HbuKUWg","ArchiveDescription":"hetzner2_20171101-072001.tar.gpg","CreationDate":"2018-04-04T16:59:35Z","Size":878943524,"SHA256TreeHash":"7cf75fb3c67f0539142708a4ff9c57fdf7fd380283552fe5104e23f9a0656787"},{"ArchiveId":"6gmWP3OdBIdlRuPIbNpJj8AiaR-2Y4FaPTneD6ZwZY2352Wfp6_1YNha4qvO1lapuITAhjdh-GzKY5ybgJag8O4eh8jjtBKuOg3nrjbABpeS7e6Djc-7PEiMKskaeldv5M52gHFUiA","ArchiveDescription":"hetzner2_20171202-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:00:09Z","Size":313055,"SHA256TreeHash":"cfac22e7a2b59e28fe13fb37567d292e5ee1e9c06da264573091e26a2640a161"},{"ArchiveId":"4Ti7ZVFaexAncEDgak5Evp97aQk45VLA6cix3OCEB1cuGM6akGq2pINO8bzUjhEV8nvpqLLqoa_MSxPWTFl4uQ8sPUCDqG0vayB8PhYHcyNES09BQR9cE2HlR7qfxMDl5Ue946jcCw","ArchiveDescription":"hetzner2_20171202-072001.tar.gpg","CreationDate":"2018-04-04T17:12:23Z","Size":1046884902,"SHA256TreeHash":"d1d98730e5bb5058ac96f825770e5e2dbdbccb9788eee81a8f3d5cb01005d4e5"},{"ArchiveId":"GSWslpTGXPiYW5-gJZ4aLrFQGfDiTifPcqsSbh8CZc6T4K8_udBkSrNV0GNZQB9eLoRrUC5cXYT06FSvZ8kltgM61VUVYOXvO0ox4jYH68_sjHnkUmimk8itpa34hBC_c0zS0ZFRLQ","ArchiveDescription":"hetzner2_20180102-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:13:04Z","Size":499163,"SHA256TreeHash":"dfbc8647e0c402d1059691322ba9f830d005eae75d38840792b5cb58cd848c76"},{"ArchiveId":"3nDMsn_-0igfg6ncwMqx3-UxQLi-ug6LEoBxqyZKsMhd83PPoJk1cqn6QFib2GeyIgJzfCZoTlwrpe9O0_GnrM7u_mUEOsiKTCXP0NadvULehNcUx-2lWQpyRrCiDg5fcBb-f7tY0g","ArchiveDescription":"hetzner2_20180102-072001.tar.gpg","CreationDate":"2018-04-04T17:22:57Z","Size":1150541914,"SHA256TreeHash":"9ca7fa55632234d3195567dc384aaf2882348cccb032e7a467291f953351f178"},{"ArchiveId":"CnSvT3qmkPPY7exbsftSC-Ci71aqjLjiL1eUa3hYo3OfVkF4s2SQ8n39rH5KaQwo3GTHeJZOVoBTW9vMEf2ufYKc9e_eVAfVcmG-bLgncRQrrV-DlE2hYglzdAalx3H5OXBY8jlD9Q","ArchiveDescription":"hetzner2_2018020
2-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:31:24Z","Size":2480097,"SHA256TreeHash":"ae746d626f04882c2767e9db7dd1ffd735b3e08bc6de5877b0b23174f14cc1ff"},{"ArchiveId":"WWIYVa-hqJzMS8q1UNNZIKfLx1V3w3lzqpCLWwflYBM7yRocX2CEyFA-aY2EKJt0hRLTshgLXE3L3Sni8bYabDLBrV2Gehgq9reRTRhn8cxoKks4f1NmZwCCTSs6L4bQuJnjjNvOKw","ArchiveDescription":"hetzner2_20180302-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T18:36:50Z","Size":3530165,"SHA256TreeHash":"52f24fe7626401804799efc0407b892b7a0188f8f235d183228c036a5544d434"},{"ArchiveId":"XQYjqYnyYbKQWIzc1BSWQpn2K8mIoPQQH-bnoje7dB3BGCbzTjbEATGYSV1qJMbeUhiT_b7lwDiZzW1ZEbHVCgMDrWxCswG3eTZxiFdSwym7rELpFh5eC7XQlxuHjHocLY2zbUhYvg","ArchiveDescription":"hetzner2_20180401-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T22:19:13Z","Size":1617586,"SHA256TreeHash":"21c578c4b99abab6eb37cb754dd36cdcc71481613bf0031886cca81cd87c8d6b"},{"ArchiveId":"kn9SKSliFV1eHh_ax1Z9rEWXR5ETF3bhdoy6IuyItI3w63rBgxaNVNk5AFJLpcR2muktNFmsSEp8QucM-B4tMdFD6PtE4K8xPJe_Cvhv3G4e2TPKn3d9HMD5Bx3XjTimGHK6rHnz0A","ArchiveDescription":"hetzner2_20180401-072001.tar.gpg","CreationDate":"2018-04-04T22:43:39Z","Size":2910497961,"SHA256TreeHash":"e82e8df61655c53a27f049af8c97df48702d5385788fb26a02d37125d102196a"},{"ArchiveId":"4-Rebjng1gztwjx1x5L0Z1uErelURR5vmCUGD3sEW6rBQRUHRjyEQWL22JAm6YPpCoBwIxzVDPyC2NvSofxx2InjmixAUoQsyy3zAgGoW0nSlqNQPfeF1hkRdOCyIDutfMTQ1keEQw","ArchiveDescription":"hetzner1_20170701-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T00:36:36Z","Size":2430229,"SHA256TreeHash":"e84e7ff3fb15b1c5cf96b3f71ee87f88c39798aea3a900d295114e19ffa0c29f"},{"ArchiveId":"OVSNJHSIy5f1WRnisLdZ9ElWY4AjdgZwFqk3vDISCtypn5AHVo7wDGOAL76SpF0XzAd-yLgD3fIzf7mvgR4maA_HCANBhIP7Sdvhi7MLMjLnXLoKoHuKayBok_VLNRFfT5XORaTemA","ArchiveDescription":"hetzner1_20170801-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T03:52:16Z","Size":2485018,"SHA256TreeHash":"27ee0c5d5f20b87ff9c820dac1e5f3e989ab4ba679e94a8034a98d718564a5cd"},{"ArchiveId":"N1TB1zWhwJq20nTRNcIzVIRL9ms1KnszY0C4XAKhfTgtuWaV1SFWlqaA0xb6NjbX6N3XDisuP0bke-I0G_8RbsFQ_PcRTwRZzNEbr4LOU4WFhLM86s-FjDwjdJHmgyttfMh_1K9RLQ","ArchiveDescription":"hetzner1_20180101-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T07:28:56Z","Size":2349744,"SHA256TreeHash":"943aa9704177a35cd45ae11b705d9e238c9e6af1c86bc6ebed46c0ae9deff97a"},{"ArchiveId":"wJyG1vWz9IcB8-mnLm9bY3do9KIsxNY9nQ8ClQaOALesN-k3R5GU11p7Q3sVeStelg9IzWvburDcVFdHmJIYHC9RuRbuSZbk_rQvxxrkhtDcviu4i9_hN4SnPHvV3i0hITuiEFGpkA","ArchiveDescription":"hetzner1_20180201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T10:07:11Z","Size":2414523,"SHA256TreeHash":"704e59a90b38e1470a7a647f21a9c0e2887b84d73af9cc451f1b0b1c554b5cb7"},{"ArchiveId":"hPtzfNk9SSUpI-_KihUEQOb89sbrK3tr0-3au-pe7al_e8qetM7uQEbNTH4_oWPqD2yajF79XPXxi4wkqAcQjoAN4IhnkPVb846wODKTpFXkRs9V8lz6nW0t_GdR2c9uYXf-xM_MpQ","ArchiveDescription":"hetzner1_20180201-062001.tar.gpg","CreationDate":"2018-04-05T13:47:38Z","Size":28576802304,"SHA256TreeHash":"dd538e88ce29080099ee59b34a7739885732e1bb6dfa28fe7fa336eb3b367f47"},{"ArchiveId":"osvrVQsHYSGCO30f0kO9aneACAA8h80KBmqfBMqDG3RioepW6ndLlNBcSvhfQ2nrcWBwLabIn4A7Rkr7sjbddViPo92viBh4lyZdyDwVcm6Pp1hQv-p2j0vldxYLWpyLDflQ8QRn4A","ArchiveDescription":"hetzner1_20180301-062002.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T15:05:32Z","Size":2436441,"SHA256TreeHash":"b3e6651391632d17ecc729255472cd5affaea7c2c65585a5d71d258199d6af48"},{"ArchiveId":"OtlG0WN4qd8kIg3xRQvRHoAzICwHRg6S3I8df5r_VRKaUNzJCsnwbO8Z9RiJPAAqqqVqg9I_GKhnt7txvEdUjx5s9hLywWm_OcRm5Lj_rJV_dupUwVlTG8HsdnCIwFseGa1JD5bviw","A
rchiveDescription":"hetzner1_20180301-062002.tar.gpg","CreationDate":"2018-04-05T18:57:24Z","Size":29224931379,"SHA256TreeHash":"3a6b009477ffe453f5460ab691709ce0dcdf6e9ae807a43339b61f0e6c5785ab"},{"ArchiveId":"2PAyQClvhEMhO-TxdAvV9Qdqa_Lvh4webx9hHIXbVnQQHJxMlhWPikmVpr1zTQRgy23r-WcOouH6gLKQ7WBRSH5yM8q5f8gb0Z2anOAwdR4A9DtxqDIVtI78-7Bs3Bf2b0fYbPQCWw","ArchiveDescription":"hetzner1_20180401-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T19:31:28Z","Size":2231140,"SHA256TreeHash":"a8a2712abf9434fa38d9aa3eb52c69accffdb4a2abe79425c3057d537b014a47"},{"ArchiveId":"Gn7a5jzeimXwa3su0i02OAK2XFmK9faX2WZx77Zq_tOx6j7ihpFEnkuF97Dpo66NgF7M24orh50kMSphvzLex_NbP9tDNoOI8mYG0-7GzOmNSmw9NaZpMLGn9NAVKbxs0byJ3YkquA","ArchiveDescription":"hetzner1_20180401-052001.tar.gpg","CreationDate":"2018-04-05T21:05:59Z","Size":12475250059,"SHA256TreeHash":"e256db8915834ddc3c096f1f3b9e4315bb56857b962722fb4093270459ed1116"},{"ArchiveId":"UqxNCpEu1twmhb9qLPxpQXMBv6yLyR37rZ1T_1tQjdl8x0RwukdIoOEGcmpHwdtrJgTA2OrWZ3ZYTncxkXojwWAOROW-wJ4SJANFfxwvGfueFNUSn17qTggcqeE43I5P1xmlxb25wg","ArchiveDescription":"hetzner1_20170701-052001.tar.gpg","CreationDate":"2018-04-07T19:26:56Z","Size":40953093076,"SHA256TreeHash":"5bf1d49a70b4031cb56b303be5bfed3321758bdf9242c944d5477eb4f3a15801"},{"ArchiveId":"NR3Z9zdD2rW0NG1y3QW735TzykIivP_cnFDMCNX6RcIPh0mRb_6QiC5qy1GrBTIoroorfzaGDIKQ0BY18jbcR3XfEzfcmrZ1FiT1YvQw-c1ag6vT46-noPvmddZ_zyy2O1ItIygI6Q","ArchiveDescription":"hetzner1_20171201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-07T21:02:35Z","Size":2333066,"SHA256TreeHash":"9775531693177494a3c515e0b3ab947f4fd6514d12d23cb297ff0b98bc09b1be"},{"ArchiveId":"3wjWOHj9f48-L180QRkg7onI5CbZcmaaqinUYJZRheCox-hc021rQ3Tl1Houf0s5W-qzk6HVRz3wkilQI_TAi2PXWaFUMibz00DAQfGj9ZQKeSOlxE_3qsIRcmYsYo-TMaU2UsSqNA","ArchiveDescription":"hetzner1_20171201-062001.tar.gpg","CreationDate":"2018-04-07T21:55:57Z","Size":12434596732,"SHA256TreeHash":"c10ce8134ffe35ba1e02d6076fc2d98f4bb3a288a5fe051fcb1e33684365ee19"},{"ArchiveId":"OfCmIMVetV8SxOBYUGFWldcHWFaFuGeLrYYm3A4YrvUU93zBrCLkOoBssToY1QIt_ZGwIueTgyoLTADetpfgswaoou_CwD8xfqss1hQAbQ7CaKW6sQHD-kcw4ii-D1h22lap95AZ4g","ArchiveDescription":"hetzner2_20180202-072001.tar.gpg","CreationDate":"2018-04-07T23:39:13Z","Size":14556435291,"SHA256TreeHash":"456b44a88a8485ceaf2080b15f0b6b7e6728caaec6edf86580369dfe91531df9"},{"ArchiveId":"PLs1lsB4c1dV3YaBG1y2SN3OEWmtImJVlz6CA6IknA6y3R8yfQV3FXcLXWC_YpczM6t05xigcynA7m1A6GkuHIyTDOr6-DCOLlEvxDHmFrA4hrzJkl2pLquNWJ9yc-JC83ZV4SkM-Q","ArchiveDescription":"hetzner2_20180302-072001.tar.gpg","CreationDate":"2018-04-08T01:47:51Z","Size":26269217831,"SHA256TreeHash":"c5b96e3419262c4c842bd3319d95dd07c30a1f00d57a2a2a6702d1d502868c98"},{"ArchiveId":"QwTHHmRo-NpqTTe2uy87GgB2MVydnz--3-3Z5u_0gdh5FPxEl2YSyjmJy3CKNDmJaNtrmwLeRF4_GubyZFc-CzlWl6OqZmINkCVSz34wY-k336C8HUOoKm5tPV3riSYaPb7WjjXwNQ","ArchiveDescription":"hetzner1_20170801-052001.tar.gpg","CreationDate":"2018-04-08T09:10:31Z","Size":41165282020,"SHA256TreeHash":"3fca81cf39f315fb2063714fffa526a533a7b3a556a6c4361a6ca3458ed66f29"},{"ArchiveId":"EmeH9kAWeVAyMa68pIknrJ135ZyXKB8WcjVKGQ58cVQE4Q98SMsX1OerOA4-_Q6epBJ_hgUT7ztFQ5d6PNiPRJ3H8uUIqXG3pkve5MaeA_cqAqvu4apBhU2HgALb1iS3NKy5IRdeUg","ArchiveDescription":"hetzner1_20180101-062001.tar.gpg","CreationDate":"2018-04-08T10:22:21Z","Size":12517449646,"SHA256TreeHash":"27e393d0ece9cadafa5902b26e0685c4b6990a95a7a18edbce5687a2e9ed4c55"},{"ArchiveId":"NX074yaGa7FGL_QH5W9-mZ9TVmi0T2428B1lW8aEck6Ydjk3H3W6pgQTisOqE9B7azs1jykJ_IL-fdbkLzhAmrpWNGJBq5hVjfMNSP-Msm976Mf7mnXe6Z6QDkO5PVXaFsNZ1EzNyw","ArchiveDescription
":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:29:47Z","Size":258332,"SHA256TreeHash":"7541016c23b31391157a4151d9277746585b220afdf1e980cafe205474f796d4"},{"ArchiveId":"uHk-GTBb6LVulxOkgs_ZYdF-cvKubUpvdP7hoS9Cqduw8YPInJaHB4LbBHpIxOL1idfYoMm-h4YI_Jq8qN3EnOBHiAjqUEwJAstagfMEvk2E38IlNLu_5J_09E0JM7MZXc4RSEZfNA","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:35:50Z","Size":7277,"SHA256TreeHash":"f431faa85f3d3910395e012c9eeeba097142d778b80e1d5c0a596970cc9c2fe6"},{"ArchiveId":"n8UslfWy3wmFYZNYJF3PfuxVoLNORVes-IunJoyzKJDYMNqmkwybrG9KVGoL4sbRspq0Tqmccn87hLGZ_A7kjBB6fvnWuAOjALhNinbDe-RkESPVPWN6464vfCIf3BI3NhK0_nzCNw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:35:52Z","Size":258332,"SHA256TreeHash":"2846f515119510f2d03f84d7aadc8648575155d2388251d98160c30d0b67cce8"},{"ArchiveId":"lzlnWYAQWMFp32BM163QS_8kb9wJ_kqaal2XmVb_rXLRDDXhSogYZCanA7oWyi3IdlWECd8R3KT3s50gJo8_kckLtq2uUUjG3Yl1wJuvXQfVh1AwzPOtLlyldqXmDoiVFzw-NrkpIw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:40:35Z","Size":7277,"SHA256TreeHash":"9dd14621a97717597c34c23548da92bcdf68960758203d5a0897360c3211a10c"},{"ArchiveId":"WimEI6ABJtXx6j4jVK4lrVKWbPmI1iPZLErplyB7ChN6MSOH3yMOeANT7L3O6BBI4G17WjSIKE6EN6YgMP9OdgxF4XjHyyGUuWNwqy-nnIETKyp7YrFuuBkSiSloBhZkC6DRqpdrww","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:40:38Z","Size":258332,"SHA256TreeHash":"81c9683d541ede1b4646b19b2b3fa9be9891251a9599d2544392a5780edaa234"},{"ArchiveId":"k-Q9oBnWeC3P7zOEN6IMEVFjl3EwPkqi5kbEvEqby4zKEpb_aDj4f88Us1X7QBvG3Pi8GUriEnNlXXlNH5s4-4cBfQryVjY_MOAnSakhgCLXs-srczsWIZvtkkMsh4XFiBpVzYao3w","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:42:46Z","Size":7259,"SHA256TreeHash":"0aeed31f85186f67b0a5f457d4fbfe3d82e27fc8ccb5df8289a44602cb2b2b18"},{"ArchiveId":"Y7_nQQC2uSB7XXEfd_zaKpp_gqGPZ_TQTXDPLmSP8k77n9NImLnTL7apkE6AlJopAkgmPiLOaTgIXc4_mSkUFp5teSOxdPxk19Cvs2fL9S1Yv5U7wihZfrsrwNffyZl289J59G-UBg","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:42:49Z","Size":258332,"SHA256TreeHash":"c5b99e4ad85ce2eddda44a72243a69fa750ae25fdcffa26a70dcabfa1f4218c8"},{"ArchiveId":"zW6rdGwDojNoD-CjYUf8p_tbX2UMPHedXwUAM4GxNRkO0GoE1Ax5rpGr38LTnzZ_rCX-4F3kdJiAm1ahm-CfAzefUxenayuoS6cg384s5UHbZGsD2QpogBj9EJDDWlzrj8hr8DPC1Q","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:45:28Z","Size":7259,"SHA256TreeHash":"dcf5c7bfbdeb25d789671a951935bf5797212963c582099c5d9da75ae1ecfccd"},{"ArchiveId":"W9argz7v3GxZUkItGwILf1QRot8BNbE4kOJVvUrwOGs72KGog0QCGc8PV-3cWUvhfxkCFLuoZE7qJCQmT2Cc_LvaV46hWFFvgs5TFBdIySr2jeil-d8cYR5oN9zAvkYCGuvDlXmgxw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:45:31Z","Size":258328,"SHA256TreeHash":"75b3cdc350fb05d69874584ed6f3374a32b9142267ca916a0db6fc535711f24a"},{"ArchiveId":"T01giTZdzpQVhhijB47T97HEtIYDHTG7sVy5mpfUbaxBaGq5fU5C1aKleXpwTKOz7_aTiWAlkeM5rM3Lg_SS3qMI1JBeZR7l8M4W5a4JmFw3MVneRYZIC9JIuTO46F91SIa8JH1vgw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2.gpg","Creatio
nDate":"2018-07-28T22:38:01Z","Size":829141,"SHA256TreeHash":"5a0fe91d11a8140b64bb9ddc3e1d8a90592e8eabe51b29bdf78bc3cdb4c97690"},{"ArchiveId":"GPXAjVjuNoyBjKU_Zx5wAxcjtLhsHwHXxKPuKDugGK3-jxNezXUG27MnJ5yDLay6yVJhZ_h3gCwlkd2y2gokIre6CK2wf2Ms3fk_m0BVkGI_Qx1PDRb7RL6P5l7yYeL1HWMhUfLFuw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2.gpg","CreationDate":"2018-07-28T23:03:36Z","Size":104307,"SHA256TreeHash":"224e7854656b88025535b9b5c72063e769f9f5eefd29a69b971b0fd01baba218"},{"ArchiveId":"o2o3n7hTDoRKwTwD2IN9bHRT7Ox1K1A3ZaauheIlvQyhycJgy4mSqRusieHvYihijY9hqWaIXXDLQMAn6xa55idBgAWkuLe_Px1xNeaae7uy3nNUOb5GgPoDr8YJ9lollj3cKd_iNw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseforum.tar.gpg","CreationDate":"2018-07-28T23:08:46Z","Size":1002685782,"SHA256TreeHash":"39c06b5421dec2acf803b1468ab11e63b60a143f5fe088097293911301f89d7e"},{"ArchiveId":"EM59cEYkyc2dTk6AZjPOWDG4ftxqTxXIM5RAMCgB8xP_wMaawcz8TY8ojij-zF9qve7Ae0grQqxe1R74HLA6Yh3R7UHMueMPThlUhpW_r2atTntGZTOkJVjevCoyCkG3P23wDckMXA","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2.gpg","CreationDate":"2018-07-28T23:13:16Z","Size":190625,"SHA256TreeHash":"816ad1bccadef1e08e399db362bc0210ede34e92f03734efb738560620591ba2"},{"ArchiveId":"vgOBLcSW8orHNlOyfPHT071fxpWtCu27wjyHoNx1Lq8V727HYmLX7JZyRXEBpszEYSKIdSU-X1DT1kzDlUeb5amFbcBU3E0s4qSja8fXz769bM89SwSNQ4gWYYgiqUqar6EbJZS2-A","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2.gpg","CreationDate":"2018-07-28T23:26:43Z","Size":101912,"SHA256TreeHash":"787b0f8f4ad5056804c4ab70dbf6420b21f334397e671da71de379df6a341db0"},{"ArchiveId":"rmnC1D8xwBN4PI90pLKD-Och9gluScd7c_3tVF9dLOlEPB8Lp_f2Y0m6YwGnmQkpkc43hrPwoaYTzQWOJRMBbhN0vdLc4RT1DRhfCE68HrQM7YzsEYY7ANf4h_lfFAE7mx1JGv16Cg","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osecivi.tar.gpg","CreationDate":"2018-07-28T23:26:48Z","Size":15254072,"SHA256TreeHash":"94a4edf8bbf92588bc7de908913198dab6db4a62bceabd124f00dfd0b337c577"},{"ArchiveId":"DBhGz8QPdNcS4Zp-jLpSOfE4sOk7tiLCQsA_rCJs9nQ1722YiaXeOLSThCvFn9RaUqfRj0UmomPE9_A2bdXbxuNi2Re3Q1uxav8HHR7RkJKdNMzYeTWbbpaDSNGyJZgYVKujqrT8TQ","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2.gpg","CreationDate":"2018-07-28T23:31:49Z","Size":4160970,"SHA256TreeHash":"8a316a67089a1333217091a8c5b25aabce0f66c8c36e9803dfb772d1fe23612b"},{"ArchiveId":"8lqDSeF3uiA9PcGb3W4hC1LwUBwT1oPLGColzsnKBq-0RpKZ4aVBMqcpKXlu5oYDGSrM4KjXnEk6ksgRkAtLuimSOWiMKUf-Nq34aDAVG5e1gmRIs8fgE2ghDaa6ToGi3os5-Q48zg","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg","CreationDate":"2018-08-01T00:04:11Z","Size":6641139501,"SHA256TreeHash":"bd60b18dfa0ae7f9bd293435882fc8ca307ad509c00e44ffe501b782284af2d8"},{"ArchiveId":"Q9FdpUiXA32lWVEa1Xdr0vigTynWsUX5nLkzvg6QCP7LsrWpOykIHrzSZIRdSubWKlJkZ5JR6eZgln7DnPV_Wso5JcFjRMP2L0TgpAMPoOSPsrP9uet3pLQPWr_ZP7aWISR7XLjhdg","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecationuser@ose:/tmp$ 
user@ose:/tmp$ 
  1. And some bash magic makes it even easier to inspect
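(aside for future admins: if jq is installed on the local machine, it gives a cleaner view of the same inventory than the sed/awk one-liner below, e.g.)
# one row per archive: creation date, size in bytes, description
jq -r '.ArchiveList[] | [.CreationDate, .Size, .ArchiveDescription] | @tsv' glacierInventory.20180821.txt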
user@ose:/tmp$ cat glacierInventory.20180821.txt | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020"
"InventoryDate":"2018-08-01T07:41:31Z"
"ArchiveList":["ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA"
"ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name"
"CreationDate":"2018-03-31T02:35:48Z"
"Size":380236
"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"
"ArchiveId":"lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw"
"ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name"
"CreationDate":"2018-03-31T02:50:36Z"
"Size":280709
"SHA256TreeHash":"3f79016e6157ff3e1c9c853337b7a3e7359a9183ae9b26f1d03c1d1c594e45ab"
"ArchiveId":"fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw"
"ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name"
"CreationDate":"2018-03-31T02:53:00Z"
"Size":280718
"SHA256TreeHash":"6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4"
"ArchiveId":"zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA"
"ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates"
"CreationDate":"2018-03-31T02:55:04Z"
"Size":1187682789
"SHA256TreeHash":"c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f"
"ArchiveId":"Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw"
"ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates"
"CreationDate":"2018-03-31T02:57:50Z"
"Size":877058000
"SHA256TreeHash":"fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b"
"ArchiveId":"P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA"
"ArchiveDescription":"hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name"
"CreationDate":"2018-03-31T22:46:18Z"
"Size":2299038
"SHA256TreeHash":"2e789c8c99f08d338f8c1c2440afd76c23f76124c3dbdd33cbfa9f46f5c6b2aa"
"ArchiveId":"o-naX0m4kQde-2i-8JZbEESi7r8OlFjIoDjgbQSXT_zt9L_e7qOH3HQ1R7ViQC3i7M0lVLbODsGZm9w9HfI3tHYKb2R1T_WWBwMxFuC_OhYiPX8uepTvvBg2Mg6KysP9H3zNzwGSZw"
"ArchiveDescription":"hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates"
"CreationDate":"2018-03-31T23:47:51Z"
"Size":12009829896
"SHA256TreeHash":"022f088abcfadefe7df5ac770f45f315ddee708f2470133ebd027ce988e1a45d"
"ArchiveId":"mxeiPukWr03RpfDr49IRdJUaJNjIWQM4gdz8S8k3-_1VetpneyWZbwEVKCB1uMTYpPy0L6HZgZP7vJ6b7gz1oeszMnlzZR0-W6Rgt4O0BZ_mwgtGHRKOH0SIpMJHRnePaq9SBR9gew"
"ArchiveDescription":"hetzner1_20171001-052001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-01T20:20:52Z"
"Size":2309259
"SHA256TreeHash":"2d2711cf7f20b52a22d97b9ebe7b6a0bd45a3211842be66ca445b83fbc7984e5"
"ArchiveId":"TOZBeL9sYVRtzy7gsAC1d930vcOhEBaABsh1ejb6vvad_NVSLu_1v0UvWqwkkf7x_8CCu6_WxolooSClZMhQOA21J_0_HP9GxvPkUvdSOeqmHjuANbIS82IRBOjFT4zFUoZnPhcVUg"
"ArchiveDescription":"hetzner1_20171001-052001.tar.gpg"
"CreationDate":"2018-04-01T21:42:48Z"
"Size":12235848201
"SHA256TreeHash":"a9868fdd4015fedbee5fb7b555a07b08d02299607eb64e73da689efb6bbad1ed"
"ArchiveId":"LdlFgzhEnxVsuGMZU4d2c_rfMTGM_3iCvLUZZSpGmmLArCQLs8HxjWLwfDDeKPKEarvSgXOVA-Evy4Ep5WAzESoofG5jdCidL5OispSfHElpPu-60xbmNvQt9neLGZrwa3C_iESGiw"
"ArchiveDescription":"hetzner1_20171101-062001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-02T18:52:49Z"
"Size":2325511
"SHA256TreeHash":"920247e75ab48e16f18e7c0528778ea95ac0b74ffb18cdb3a68c0538d3e701e4"
"ArchiveId":"6GHR8GlRG4EIlkA7O_Ta6BAXN3BQ7HmP0V7TgOp6bOa4cxuIlbHkmCd3I2lUSNwfG1penWOibFvvDhzgcihdmUMtCLepT3rl6HtFR5Lv-ro5mIegCcWQJOUDT0FRfsb7e7IkAze02Q"
"ArchiveDescription":"hetzner1_20171101-062001.tar.gpg"
"CreationDate":"2018-04-02T20:18:50Z"
"Size":12385858738
"SHA256TreeHash":"24c67d8686565c9f2b8b3eeacf2b4a0ec756a9f2092f44b28b56d2074d4ad364"
"ArchiveId":"lryfyQFE4NbtWg5Q6uTq8Qqyc-y9il9WYe7lHs8H2lzFSBADOJQmCIgp6FxrkiaCcwnSMIReJPWyWcR4UOnurxwONhw8fojEHQTTeOpkf6fgfWBAPP9P6GOZZ0v8d8Jz_-QFVaV6Bw"
"ArchiveDescription":"hetzner1_20171201-062001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-02T20:56:23Z"
"Size":2332970
"SHA256TreeHash":"366449cb278e2010864432c0f74a4d43bceff2e6a7b2827761b63d5e7e737a01"
"ArchiveId":"O19wuK1PL_Wwf59-fjQuVP2Con0LXLf5Mk9xQA3HDPw4y1ZdwjYdFzmhZdaMUtGX666YKKjJu823l2C6seOTLg1ZbXZVTqQjZTeZGkQdCSRQdxyo3pEPWE2Iqpgb61FCiIETdCANUQ"
"ArchiveDescription":"hetzner2_20170702-052001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T12:29:09Z"
"Size":2039060
"SHA256TreeHash":"24df13a4150ab6ae7472727769677395c5e162660aea8aa0748833ad85c83e7a"
"ArchiveId":"6ShVCMDoqdhc4wg84L1bXaq3O2InX-qB9Q9NMRH-xJQ0_TSlIN5b3fysow9-_RuNYc2lK958NrwFiIEa7Q0bVaT9LaZQH8WtoTqnX3DN2xJhb4_KUdu6iUaDdJUoPfsSXtC7xvPb-w"
"ArchiveDescription":"hetzner2_20170702-052001.tar.gpg"
"CreationDate":"2018-04-04T15:52:53Z"
"Size":21323056209
"SHA256TreeHash":"55030f294360adf1ed86e6e437a03432eb990c6d9c3e6b4a944004ad88d678e8"
"ArchiveId":"0M5MSxjrlWJiT0XrncbVBITR__anuTLeOhcq9XvqsX0Q1koa0K0bH-wrZOQO7YsqqPv5Te3AUXPOCzIO6F0g5DQ2tOZq8E_YHX0XmMGjnOfeHIV9m_5GiCQAi3PrUuWM3C4cApTs7A"
"ArchiveDescription":"hetzner2_20170801-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T15:54:20Z"
"Size":198754
"SHA256TreeHash":"978f15fec9eef2171bdddf3da501b436d3bc3c7e518f2e70c0a1fd40ccf20d2a"
"ArchiveId":"fwR6U5jX2T9N4mc14YQNMoA52vXICj-vvgIvYyDO5Qcv-pNeuXarT4gpzIy-XjuuF4KXkp9BXD13AA3hsau9PfW0ypy874m7arznCaMZO8ajm3NIicawZMiHGEikWw82EGY0z4VDIQ"
"ArchiveDescription":"hetzner2_20170801-072001.tar.gpg"
"CreationDate":"2018-04-04T16:08:27Z"
"Size":1746085455
"SHA256TreeHash":"6f3c5ede57e86991d646e577760197a01344bf013fb17a966fd7e2440f4b1062"
"ArchiveId":"EZG83EoQ65jxe4ye0-0qszEqRjLE3lAb2Vi7vZ2eYvj1bVJnTc5kvfWgTxl4_w2G1PPk4pn6g2dIsYXosWk3OqWNaWNcYEOHEkNREHycnTpcl0rBkWJoimt9fCKLJCF7FiGavWUMSw"
"ArchiveDescription":"hetzner2_20170901-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T16:09:29Z"
"Size":287980
"SHA256TreeHash":"b72f11bb747ddb47d88f0566122a1190caf569b2b999ed22b8a98c6194ac4a0e"
"ArchiveId":"5xqn4AAJhxnOHLJMfkvGX3Qksj5BTiEyHURILglfH0TPh_GfvbZNHqzdYIW-8sMtJ8OQ1GnnFqAOpty5mMwOSEjaokWkrQhEZK9-q7FBKDXXglAlqQKEJpd2UcTQI47zBEmGRasm-A"
"ArchiveDescription":"hetzner2_20170901-072001.tar.gpg"
"CreationDate":"2018-04-04T16:27:43Z"
"Size":1800393587
"SHA256TreeHash":"87400a80fc668b03ad712eaf8f6096172b5fc0aaa81309cc390dd34cc3caecec"
"ArchiveId":"3XL4MENpH6i2Dp6micFWmMR2-qim3D1LQGiyQHME_5_A5jAbepw7WDEJOS2m2gIudSXfCuyclHTqzZYEpr6RwTGIEmYGw1jQ-EDPWYzjGTGDJcwWZEiklTmhLgvezqfyeSnQsdQZtA"
"ArchiveDescription":"hetzner2_20171001-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T16:29:10Z"
"Size":662050
"SHA256TreeHash":"506877424ae5304ca0c635d98fb8d01ad9183ec46356882edf02d61e9c48da8d"
"ArchiveId":"g8RFNrkxynpQ8Yt9y4KyJra09dhxd3fIJxDlaUeDYBe615j7XON8gAdHMAQVerPQ4VF10obcuHnp64-kJFMmkG722hrlp3QBKy262CD4CcSUTSk3m070Mz6q3xySkcPzqRyxDwjtYg"
"ArchiveDescription":"hetzner2_20171001-072001.tar.gpg"
"CreationDate":"2018-04-04T16:51:09Z"
"Size":2648387053
"SHA256TreeHash":"1bf72e58a8301796d4f0a357a3f08d771da53875df4696ca201e81d1e8f0d82b"
"ArchiveId":"ktHLXVqR5UxOoXEO5uRNMrIq4Jf2XrA6VmLQ0qgirJUeCler9Zcej90Qyg9bHvhQJPreilT4jwuW08oy7rZD_jnjd_2rcdZ11Y5Zl3V25lSKdRPM-b21o21kaBEr_ihhlIxOmPqJXg"
"ArchiveDescription":"hetzner2_20171101-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T16:51:40Z"
"Size":280741
"SHA256TreeHash":"f227ecd619df1564f2bb835029864fad804461db5496f6825a76e4720e3098a7"
"ArchiveId":"iUmKTuLdEX3By9oHoqPtJd4KpEQ_2xh5PKV4LPuwBDcXyZmtt4zfq96djdQar1HwYmIh64bXEGqP7kGc0hk0ZtWZc12TtFUL0zohEbKBYr2VFZCQHjmc461TMLskKsOiyd6HbuKUWg"
"ArchiveDescription":"hetzner2_20171101-072001.tar.gpg"
"CreationDate":"2018-04-04T16:59:35Z"
"Size":878943524
"SHA256TreeHash":"7cf75fb3c67f0539142708a4ff9c57fdf7fd380283552fe5104e23f9a0656787"
"ArchiveId":"6gmWP3OdBIdlRuPIbNpJj8AiaR-2Y4FaPTneD6ZwZY2352Wfp6_1YNha4qvO1lapuITAhjdh-GzKY5ybgJag8O4eh8jjtBKuOg3nrjbABpeS7e6Djc-7PEiMKskaeldv5M52gHFUiA"
"ArchiveDescription":"hetzner2_20171202-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T17:00:09Z"
"Size":313055
"SHA256TreeHash":"cfac22e7a2b59e28fe13fb37567d292e5ee1e9c06da264573091e26a2640a161"
"ArchiveId":"4Ti7ZVFaexAncEDgak5Evp97aQk45VLA6cix3OCEB1cuGM6akGq2pINO8bzUjhEV8nvpqLLqoa_MSxPWTFl4uQ8sPUCDqG0vayB8PhYHcyNES09BQR9cE2HlR7qfxMDl5Ue946jcCw"
"ArchiveDescription":"hetzner2_20171202-072001.tar.gpg"
"CreationDate":"2018-04-04T17:12:23Z"
"Size":1046884902
"SHA256TreeHash":"d1d98730e5bb5058ac96f825770e5e2dbdbccb9788eee81a8f3d5cb01005d4e5"
"ArchiveId":"GSWslpTGXPiYW5-gJZ4aLrFQGfDiTifPcqsSbh8CZc6T4K8_udBkSrNV0GNZQB9eLoRrUC5cXYT06FSvZ8kltgM61VUVYOXvO0ox4jYH68_sjHnkUmimk8itpa34hBC_c0zS0ZFRLQ"
"ArchiveDescription":"hetzner2_20180102-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T17:13:04Z"
"Size":499163
"SHA256TreeHash":"dfbc8647e0c402d1059691322ba9f830d005eae75d38840792b5cb58cd848c76"
"ArchiveId":"3nDMsn_-0igfg6ncwMqx3-UxQLi-ug6LEoBxqyZKsMhd83PPoJk1cqn6QFib2GeyIgJzfCZoTlwrpe9O0_GnrM7u_mUEOsiKTCXP0NadvULehNcUx-2lWQpyRrCiDg5fcBb-f7tY0g"
"ArchiveDescription":"hetzner2_20180102-072001.tar.gpg"
"CreationDate":"2018-04-04T17:22:57Z"
"Size":1150541914
"SHA256TreeHash":"9ca7fa55632234d3195567dc384aaf2882348cccb032e7a467291f953351f178"
"ArchiveId":"CnSvT3qmkPPY7exbsftSC-Ci71aqjLjiL1eUa3hYo3OfVkF4s2SQ8n39rH5KaQwo3GTHeJZOVoBTW9vMEf2ufYKc9e_eVAfVcmG-bLgncRQrrV-DlE2hYglzdAalx3H5OXBY8jlD9Q"
"ArchiveDescription":"hetzner2_20180202-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T17:31:24Z"
"Size":2480097
"SHA256TreeHash":"ae746d626f04882c2767e9db7dd1ffd735b3e08bc6de5877b0b23174f14cc1ff"
"ArchiveId":"WWIYVa-hqJzMS8q1UNNZIKfLx1V3w3lzqpCLWwflYBM7yRocX2CEyFA-aY2EKJt0hRLTshgLXE3L3Sni8bYabDLBrV2Gehgq9reRTRhn8cxoKks4f1NmZwCCTSs6L4bQuJnjjNvOKw"
"ArchiveDescription":"hetzner2_20180302-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T18:36:50Z"
"Size":3530165
"SHA256TreeHash":"52f24fe7626401804799efc0407b892b7a0188f8f235d183228c036a5544d434"
"ArchiveId":"XQYjqYnyYbKQWIzc1BSWQpn2K8mIoPQQH-bnoje7dB3BGCbzTjbEATGYSV1qJMbeUhiT_b7lwDiZzW1ZEbHVCgMDrWxCswG3eTZxiFdSwym7rELpFh5eC7XQlxuHjHocLY2zbUhYvg"
"ArchiveDescription":"hetzner2_20180401-072001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-04T22:19:13Z"
"Size":1617586
"SHA256TreeHash":"21c578c4b99abab6eb37cb754dd36cdcc71481613bf0031886cca81cd87c8d6b"
"ArchiveId":"kn9SKSliFV1eHh_ax1Z9rEWXR5ETF3bhdoy6IuyItI3w63rBgxaNVNk5AFJLpcR2muktNFmsSEp8QucM-B4tMdFD6PtE4K8xPJe_Cvhv3G4e2TPKn3d9HMD5Bx3XjTimGHK6rHnz0A"
"ArchiveDescription":"hetzner2_20180401-072001.tar.gpg"
"CreationDate":"2018-04-04T22:43:39Z"
"Size":2910497961
"SHA256TreeHash":"e82e8df61655c53a27f049af8c97df48702d5385788fb26a02d37125d102196a"
"ArchiveId":"4-Rebjng1gztwjx1x5L0Z1uErelURR5vmCUGD3sEW6rBQRUHRjyEQWL22JAm6YPpCoBwIxzVDPyC2NvSofxx2InjmixAUoQsyy3zAgGoW0nSlqNQPfeF1hkRdOCyIDutfMTQ1keEQw"
"ArchiveDescription":"hetzner1_20170701-052001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-05T00:36:36Z"
"Size":2430229
"SHA256TreeHash":"e84e7ff3fb15b1c5cf96b3f71ee87f88c39798aea3a900d295114e19ffa0c29f"
"ArchiveId":"OVSNJHSIy5f1WRnisLdZ9ElWY4AjdgZwFqk3vDISCtypn5AHVo7wDGOAL76SpF0XzAd-yLgD3fIzf7mvgR4maA_HCANBhIP7Sdvhi7MLMjLnXLoKoHuKayBok_VLNRFfT5XORaTemA"
"ArchiveDescription":"hetzner1_20170801-052001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-05T03:52:16Z"
"Size":2485018
"SHA256TreeHash":"27ee0c5d5f20b87ff9c820dac1e5f3e989ab4ba679e94a8034a98d718564a5cd"
"ArchiveId":"N1TB1zWhwJq20nTRNcIzVIRL9ms1KnszY0C4XAKhfTgtuWaV1SFWlqaA0xb6NjbX6N3XDisuP0bke-I0G_8RbsFQ_PcRTwRZzNEbr4LOU4WFhLM86s-FjDwjdJHmgyttfMh_1K9RLQ"
"ArchiveDescription":"hetzner1_20180101-062001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-05T07:28:56Z"
"Size":2349744
"SHA256TreeHash":"943aa9704177a35cd45ae11b705d9e238c9e6af1c86bc6ebed46c0ae9deff97a"
"ArchiveId":"wJyG1vWz9IcB8-mnLm9bY3do9KIsxNY9nQ8ClQaOALesN-k3R5GU11p7Q3sVeStelg9IzWvburDcVFdHmJIYHC9RuRbuSZbk_rQvxxrkhtDcviu4i9_hN4SnPHvV3i0hITuiEFGpkA"
"ArchiveDescription":"hetzner1_20180201-062001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-05T10:07:11Z"
"Size":2414523
"SHA256TreeHash":"704e59a90b38e1470a7a647f21a9c0e2887b84d73af9cc451f1b0b1c554b5cb7"
"ArchiveId":"hPtzfNk9SSUpI-_KihUEQOb89sbrK3tr0-3au-pe7al_e8qetM7uQEbNTH4_oWPqD2yajF79XPXxi4wkqAcQjoAN4IhnkPVb846wODKTpFXkRs9V8lz6nW0t_GdR2c9uYXf-xM_MpQ"
"ArchiveDescription":"hetzner1_20180201-062001.tar.gpg"
"CreationDate":"2018-04-05T13:47:38Z"
"Size":28576802304
"SHA256TreeHash":"dd538e88ce29080099ee59b34a7739885732e1bb6dfa28fe7fa336eb3b367f47"
"ArchiveId":"osvrVQsHYSGCO30f0kO9aneACAA8h80KBmqfBMqDG3RioepW6ndLlNBcSvhfQ2nrcWBwLabIn4A7Rkr7sjbddViPo92viBh4lyZdyDwVcm6Pp1hQv-p2j0vldxYLWpyLDflQ8QRn4A"
"ArchiveDescription":"hetzner1_20180301-062002.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-05T15:05:32Z"
"Size":2436441
"SHA256TreeHash":"b3e6651391632d17ecc729255472cd5affaea7c2c65585a5d71d258199d6af48"
"ArchiveId":"OtlG0WN4qd8kIg3xRQvRHoAzICwHRg6S3I8df5r_VRKaUNzJCsnwbO8Z9RiJPAAqqqVqg9I_GKhnt7txvEdUjx5s9hLywWm_OcRm5Lj_rJV_dupUwVlTG8HsdnCIwFseGa1JD5bviw"
"ArchiveDescription":"hetzner1_20180301-062002.tar.gpg"
"CreationDate":"2018-04-05T18:57:24Z"
"Size":29224931379
"SHA256TreeHash":"3a6b009477ffe453f5460ab691709ce0dcdf6e9ae807a43339b61f0e6c5785ab"
"ArchiveId":"2PAyQClvhEMhO-TxdAvV9Qdqa_Lvh4webx9hHIXbVnQQHJxMlhWPikmVpr1zTQRgy23r-WcOouH6gLKQ7WBRSH5yM8q5f8gb0Z2anOAwdR4A9DtxqDIVtI78-7Bs3Bf2b0fYbPQCWw"
"ArchiveDescription":"hetzner1_20180401-052001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-05T19:31:28Z"
"Size":2231140
"SHA256TreeHash":"a8a2712abf9434fa38d9aa3eb52c69accffdb4a2abe79425c3057d537b014a47"
"ArchiveId":"Gn7a5jzeimXwa3su0i02OAK2XFmK9faX2WZx77Zq_tOx6j7ihpFEnkuF97Dpo66NgF7M24orh50kMSphvzLex_NbP9tDNoOI8mYG0-7GzOmNSmw9NaZpMLGn9NAVKbxs0byJ3YkquA"
"ArchiveDescription":"hetzner1_20180401-052001.tar.gpg"
"CreationDate":"2018-04-05T21:05:59Z"
"Size":12475250059
"SHA256TreeHash":"e256db8915834ddc3c096f1f3b9e4315bb56857b962722fb4093270459ed1116"
"ArchiveId":"UqxNCpEu1twmhb9qLPxpQXMBv6yLyR37rZ1T_1tQjdl8x0RwukdIoOEGcmpHwdtrJgTA2OrWZ3ZYTncxkXojwWAOROW-wJ4SJANFfxwvGfueFNUSn17qTggcqeE43I5P1xmlxb25wg"
"ArchiveDescription":"hetzner1_20170701-052001.tar.gpg"
"CreationDate":"2018-04-07T19:26:56Z"
"Size":40953093076
"SHA256TreeHash":"5bf1d49a70b4031cb56b303be5bfed3321758bdf9242c944d5477eb4f3a15801"
"ArchiveId":"NR3Z9zdD2rW0NG1y3QW735TzykIivP_cnFDMCNX6RcIPh0mRb_6QiC5qy1GrBTIoroorfzaGDIKQ0BY18jbcR3XfEzfcmrZ1FiT1YvQw-c1ag6vT46-noPvmddZ_zyy2O1ItIygI6Q"
"ArchiveDescription":"hetzner1_20171201-062001.fileList.txt.bz2.gpg"
"CreationDate":"2018-04-07T21:02:35Z"
"Size":2333066
"SHA256TreeHash":"9775531693177494a3c515e0b3ab947f4fd6514d12d23cb297ff0b98bc09b1be"
"ArchiveId":"3wjWOHj9f48-L180QRkg7onI5CbZcmaaqinUYJZRheCox-hc021rQ3Tl1Houf0s5W-qzk6HVRz3wkilQI_TAi2PXWaFUMibz00DAQfGj9ZQKeSOlxE_3qsIRcmYsYo-TMaU2UsSqNA"
"ArchiveDescription":"hetzner1_20171201-062001.tar.gpg"
"CreationDate":"2018-04-07T21:55:57Z"
"Size":12434596732
"SHA256TreeHash":"c10ce8134ffe35ba1e02d6076fc2d98f4bb3a288a5fe051fcb1e33684365ee19"
"ArchiveId":"OfCmIMVetV8SxOBYUGFWldcHWFaFuGeLrYYm3A4YrvUU93zBrCLkOoBssToY1QIt_ZGwIueTgyoLTADetpfgswaoou_CwD8xfqss1hQAbQ7CaKW6sQHD-kcw4ii-D1h22lap95AZ4g"
"ArchiveDescription":"hetzner2_20180202-072001.tar.gpg"
"CreationDate":"2018-04-07T23:39:13Z"
"Size":14556435291
"SHA256TreeHash":"456b44a88a8485ceaf2080b15f0b6b7e6728caaec6edf86580369dfe91531df9"
"ArchiveId":"PLs1lsB4c1dV3YaBG1y2SN3OEWmtImJVlz6CA6IknA6y3R8yfQV3FXcLXWC_YpczM6t05xigcynA7m1A6GkuHIyTDOr6-DCOLlEvxDHmFrA4hrzJkl2pLquNWJ9yc-JC83ZV4SkM-Q"
"ArchiveDescription":"hetzner2_20180302-072001.tar.gpg"
"CreationDate":"2018-04-08T01:47:51Z"
"Size":26269217831
"SHA256TreeHash":"c5b96e3419262c4c842bd3319d95dd07c30a1f00d57a2a2a6702d1d502868c98"
"ArchiveId":"QwTHHmRo-NpqTTe2uy87GgB2MVydnz--3-3Z5u_0gdh5FPxEl2YSyjmJy3CKNDmJaNtrmwLeRF4_GubyZFc-CzlWl6OqZmINkCVSz34wY-k336C8HUOoKm5tPV3riSYaPb7WjjXwNQ"
"ArchiveDescription":"hetzner1_20170801-052001.tar.gpg"
"CreationDate":"2018-04-08T09:10:31Z"
"Size":41165282020
"SHA256TreeHash":"3fca81cf39f315fb2063714fffa526a533a7b3a556a6c4361a6ca3458ed66f29"
"ArchiveId":"EmeH9kAWeVAyMa68pIknrJ135ZyXKB8WcjVKGQ58cVQE4Q98SMsX1OerOA4-_Q6epBJ_hgUT7ztFQ5d6PNiPRJ3H8uUIqXG3pkve5MaeA_cqAqvu4apBhU2HgALb1iS3NKy5IRdeUg"
"ArchiveDescription":"hetzner1_20180101-062001.tar.gpg"
"CreationDate":"2018-04-08T10:22:21Z"
"Size":12517449646
"SHA256TreeHash":"27e393d0ece9cadafa5902b26e0685c4b6990a95a7a18edbce5687a2e9ed4c55"
"ArchiveId":"NX074yaGa7FGL_QH5W9-mZ9TVmi0T2428B1lW8aEck6Ydjk3H3W6pgQTisOqE9B7azs1jykJ_IL-fdbkLzhAmrpWNGJBq5hVjfMNSP-Msm976Mf7mnXe6Z6QDkO5PVXaFsNZ1EzNyw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:29:47Z"
"Size":258332
"SHA256TreeHash":"7541016c23b31391157a4151d9277746585b220afdf1e980cafe205474f796d4"
"ArchiveId":"uHk-GTBb6LVulxOkgs_ZYdF-cvKubUpvdP7hoS9Cqduw8YPInJaHB4LbBHpIxOL1idfYoMm-h4YI_Jq8qN3EnOBHiAjqUEwJAstagfMEvk2E38IlNLu_5J_09E0JM7MZXc4RSEZfNA"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:35:50Z"
"Size":7277
"SHA256TreeHash":"f431faa85f3d3910395e012c9eeeba097142d778b80e1d5c0a596970cc9c2fe6"
"ArchiveId":"n8UslfWy3wmFYZNYJF3PfuxVoLNORVes-IunJoyzKJDYMNqmkwybrG9KVGoL4sbRspq0Tqmccn87hLGZ_A7kjBB6fvnWuAOjALhNinbDe-RkESPVPWN6464vfCIf3BI3NhK0_nzCNw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:35:52Z"
"Size":258332
"SHA256TreeHash":"2846f515119510f2d03f84d7aadc8648575155d2388251d98160c30d0b67cce8"
"ArchiveId":"lzlnWYAQWMFp32BM163QS_8kb9wJ_kqaal2XmVb_rXLRDDXhSogYZCanA7oWyi3IdlWECd8R3KT3s50gJo8_kckLtq2uUUjG3Yl1wJuvXQfVh1AwzPOtLlyldqXmDoiVFzw-NrkpIw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:40:35Z"
"Size":7277
"SHA256TreeHash":"9dd14621a97717597c34c23548da92bcdf68960758203d5a0897360c3211a10c"
"ArchiveId":"WimEI6ABJtXx6j4jVK4lrVKWbPmI1iPZLErplyB7ChN6MSOH3yMOeANT7L3O6BBI4G17WjSIKE6EN6YgMP9OdgxF4XjHyyGUuWNwqy-nnIETKyp7YrFuuBkSiSloBhZkC6DRqpdrww"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:40:38Z"
"Size":258332
"SHA256TreeHash":"81c9683d541ede1b4646b19b2b3fa9be9891251a9599d2544392a5780edaa234"
"ArchiveId":"k-Q9oBnWeC3P7zOEN6IMEVFjl3EwPkqi5kbEvEqby4zKEpb_aDj4f88Us1X7QBvG3Pi8GUriEnNlXXlNH5s4-4cBfQryVjY_MOAnSakhgCLXs-srczsWIZvtkkMsh4XFiBpVzYao3w"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:42:46Z"
"Size":7259
"SHA256TreeHash":"0aeed31f85186f67b0a5f457d4fbfe3d82e27fc8ccb5df8289a44602cb2b2b18"
"ArchiveId":"Y7_nQQC2uSB7XXEfd_zaKpp_gqGPZ_TQTXDPLmSP8k77n9NImLnTL7apkE6AlJopAkgmPiLOaTgIXc4_mSkUFp5teSOxdPxk19Cvs2fL9S1Yv5U7wihZfrsrwNffyZl289J59G-UBg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:42:49Z"
"Size":258332
"SHA256TreeHash":"c5b99e4ad85ce2eddda44a72243a69fa750ae25fdcffa26a70dcabfa1f4218c8"
"ArchiveId":"zW6rdGwDojNoD-CjYUf8p_tbX2UMPHedXwUAM4GxNRkO0GoE1Ax5rpGr38LTnzZ_rCX-4F3kdJiAm1ahm-CfAzefUxenayuoS6cg384s5UHbZGsD2QpogBj9EJDDWlzrj8hr8DPC1Q"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:45:28Z"
"Size":7259
"SHA256TreeHash":"dcf5c7bfbdeb25d789671a951935bf5797212963c582099c5d9da75ae1ecfccd"
"ArchiveId":"W9argz7v3GxZUkItGwILf1QRot8BNbE4kOJVvUrwOGs72KGog0QCGc8PV-3cWUvhfxkCFLuoZE7qJCQmT2Cc_LvaV46hWFFvgs5TFBdIySr2jeil-d8cYR5oN9zAvkYCGuvDlXmgxw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:45:31Z"
"Size":258328
"SHA256TreeHash":"75b3cdc350fb05d69874584ed6f3374a32b9142267ca916a0db6fc535711f24a"
"ArchiveId":"T01giTZdzpQVhhijB47T97HEtIYDHTG7sVy5mpfUbaxBaGq5fU5C1aKleXpwTKOz7_aTiWAlkeM5rM3Lg_SS3qMI1JBeZR7l8M4W5a4JmFw3MVneRYZIC9JIuTO46F91SIa8JH1vgw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-28T22:38:01Z"
"Size":829141
"SHA256TreeHash":"5a0fe91d11a8140b64bb9ddc3e1d8a90592e8eabe51b29bdf78bc3cdb4c97690"
"ArchiveId":"GPXAjVjuNoyBjKU_Zx5wAxcjtLhsHwHXxKPuKDugGK3-jxNezXUG27MnJ5yDLay6yVJhZ_h3gCwlkd2y2gokIre6CK2wf2Ms3fk_m0BVkGI_Qx1PDRb7RL6P5l7yYeL1HWMhUfLFuw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-28T23:03:36Z"
"Size":104307
"SHA256TreeHash":"224e7854656b88025535b9b5c72063e769f9f5eefd29a69b971b0fd01baba218"
"ArchiveId":"o2o3n7hTDoRKwTwD2IN9bHRT7Ox1K1A3ZaauheIlvQyhycJgy4mSqRusieHvYihijY9hqWaIXXDLQMAn6xa55idBgAWkuLe_Px1xNeaae7uy3nNUOb5GgPoDr8YJ9lollj3cKd_iNw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseforum.tar.gpg"
"CreationDate":"2018-07-28T23:08:46Z"
"Size":1002685782
"SHA256TreeHash":"39c06b5421dec2acf803b1468ab11e63b60a143f5fe088097293911301f89d7e"
"ArchiveId":"EM59cEYkyc2dTk6AZjPOWDG4ftxqTxXIM5RAMCgB8xP_wMaawcz8TY8ojij-zF9qve7Ae0grQqxe1R74HLA6Yh3R7UHMueMPThlUhpW_r2atTntGZTOkJVjevCoyCkG3P23wDckMXA"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-28T23:13:16Z"
"Size":190625
"SHA256TreeHash":"816ad1bccadef1e08e399db362bc0210ede34e92f03734efb738560620591ba2"
"ArchiveId":"vgOBLcSW8orHNlOyfPHT071fxpWtCu27wjyHoNx1Lq8V727HYmLX7JZyRXEBpszEYSKIdSU-X1DT1kzDlUeb5amFbcBU3E0s4qSja8fXz769bM89SwSNQ4gWYYgiqUqar6EbJZS2-A"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-28T23:26:43Z"
"Size":101912
"SHA256TreeHash":"787b0f8f4ad5056804c4ab70dbf6420b21f334397e671da71de379df6a341db0"
"ArchiveId":"rmnC1D8xwBN4PI90pLKD-Och9gluScd7c_3tVF9dLOlEPB8Lp_f2Y0m6YwGnmQkpkc43hrPwoaYTzQWOJRMBbhN0vdLc4RT1DRhfCE68HrQM7YzsEYY7ANf4h_lfFAE7mx1JGv16Cg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osecivi.tar.gpg"
"CreationDate":"2018-07-28T23:26:48Z"
"Size":15254072
"SHA256TreeHash":"94a4edf8bbf92588bc7de908913198dab6db4a62bceabd124f00dfd0b337c577"
"ArchiveId":"DBhGz8QPdNcS4Zp-jLpSOfE4sOk7tiLCQsA_rCJs9nQ1722YiaXeOLSThCvFn9RaUqfRj0UmomPE9_A2bdXbxuNi2Re3Q1uxav8HHR7RkJKdNMzYeTWbbpaDSNGyJZgYVKujqrT8TQ"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-28T23:31:49Z"
"Size":4160970
"SHA256TreeHash":"8a316a67089a1333217091a8c5b25aabce0f66c8c36e9803dfb772d1fe23612b"
"ArchiveId":"8lqDSeF3uiA9PcGb3W4hC1LwUBwT1oPLGColzsnKBq-0RpKZ4aVBMqcpKXlu5oYDGSrM4KjXnEk6ksgRkAtLuimSOWiMKUf-Nq34aDAVG5e1gmRIs8fgE2ghDaa6ToGi3os5-Q48zg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg"
"CreationDate":"2018-08-01T00:04:11Z"
"Size":6641139501
"SHA256TreeHash":"bd60b18dfa0ae7f9bd293435882fc8ca307ad509c00e44ffe501b782284af2d8"
"ArchiveId":"Q9FdpUiXA32lWVEa1Xdr0vigTynWsUX5nLkzvg6QCP7LsrWpOykIHrzSZIRdSubWKlJkZ5JR6eZgln7DnPV_Wso5JcFjRMP2L0TgpAMPoOSPsrP9uet3pLQPWr_ZP7aWISR7XLjhdg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseblog.tar.gpg"
"CreationDate":"2018-08-01T00:21:55Z"
"Size":4702431197
"SHA256TreeHash":"c5d5d05b5153dfbc34cd6787e0adbc2837aa1d15a857bfdbede2be435b872717"
"ArchiveId":"e2bM8CyOWaQWtrZXk3OahrVeNpateHCkkyBd4UkBPuaJPz-HNdlnrVMA9M4nZdhqTdNMVfpsLK0HEIWxBT8zVJaHKMRdDWKSs9rb86gxyDjyIX6m4oSlik3I6EC1_ZMFhmpYKlMrPQ"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osemain.tar.gpg"
"CreationDate":"2018-08-01T00:32:59Z"
"Size":3522007765
"SHA256TreeHash":"3535274728478275bbc67b1c5e7d682b27f5ebeba0c86760953f823ecbedc638"]
user@ose:/tmp$ 
  1. and a bit of grepping shows that all the archives we want are present
user@ose:/tmp$ cat glacierInventory.20180821.txt | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}' | grep -Ei 'microft|oseblog|osecivi|oseforum|osemain|osesurv' | sort
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseblog.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osecivi.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_oseforum.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osemain.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
user@ose:/tmp$ 
    1. note again that the duplicates of osesurv were early tests. The sizes are very small, so the cost of this duplicate data in glacier is negligible.
  1. ok, I'm finally ready to cancel our hetzner1 contract. Question is: when is the next bill due?
  2. I sent an email to Marcin & Catarina. After some back-and-forth, Marcin told me to go ahead and terminate the contract. I told him I would pull the trigger on Thu Aug 23rd, so he has at least 24 hours to tell me to abort. I documented this on the CHG https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation#Status

Mon Aug 20, 2018

  1. Marcin said the final name for the new site is 'microfactory.opensourceecology.org', per my suggestion
  2. the inventory check for backups on glacier timed-out again. I'll try again (a polling sketch that would avoid missing the output window follows the command below).
[root@hetzner2 ~]# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id 'c84MNJlg2CA0V1NYXezXhppBpCSxmK_UjcE6_eXthqNI7956x1AVafRrnfNFvvr07WWWyJK7_rLM5beM9tmCMbGFWuE_' output.json

An error occurred (ResourceNotFoundException) when calling the GetJobOutput operation: The job ID was not found: c84MNJlg2CA0V1NYXezXhppBpCSxmK_UjcE6_eXthqNI7956x1AVafRrnfNFvvr07WWWyJK7_rLM5beM9tmCMbGFWuE_
[root@hetzner2 ~]# 
[root@hetzner2 ~]# aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'
{
	"location": "/099400651767/vaults/deleteMeIn2020/jobs/btg7RVs9SduwBh_0xsIdC0WZ4xpHQcJfyDCdgytXNs3qBs5s68f6KniCUmYygTyV1YM7OJ-3sKoZg54ZUMhDLQ-FoWso", 
	"jobId": "btg7RVs9SduwBh_0xsIdC0WZ4xpHQcJfyDCdgytXNs3qBs5s68f6KniCUmYygTyV1YM7OJ-3sKoZg54ZUMhDLQ-FoWso"
}
[root@hetzner2 ~]# 
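  1. for next time, a minimal sketch (not an existing script) of how to avoid losing the job output: capture the jobId from initiate-job, poll describe-job until Glacier marks the job complete, then save the output immediately. The vault name is from this log; the 10-minute poll interval is arbitrary.
# initiate the inventory job & remember its id
vault=deleteMeIn2020
jobId=$(aws glacier initiate-job --account-id - --vault-name "$vault" --job-parameters '{"Type": "inventory-retrieval"}' --query jobId --output text)
# wait for glacier to finish the job (usually ~4 hours), checking every 10 minutes
until [ "$(aws glacier describe-job --account-id - --vault-name "$vault" --job-id "$jobId" --query Completed --output text)" = "True" ]; do
  sleep 600
done
# grab the output right away, before the job (and its output) expires roughly 24 hours after completion
aws glacier get-job-output --account-id - --vault-name "$vault" --job-id "$jobId" "glacierInventory.$(date +%Y%m%d).json"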
  1. continuing my phplist research, I looked at all the alternativeto.net listings for open source alternatives to phplist https://alternativeto.net/software/phplist/?license=opensource
    1. Mailman https://www.gnu.org/software/mailman/index.html
    2. Sympa https://alternativeto.net/software/sympa/
    3. OpenEMM https://www.openemm.org/
      1. + appears to have lots of good features
    4. mlmmj http://mlmmj.org/
    5. MailCtlr http://www.ctlr.it/
      1. - appears to have only one dev and no web interface
    6. Gutama http://gutuma.com/
      1. - website is broken & project is discontinued
    7. iReach
      1. - based on ruby. ick.
  2. phplist does appear to be the best option. It even has a/b testing!
    1. they also have PGP support, and they support PFS & HSTS. This doesn't look like security theater; yay! https://www.phplist.com/features
  3. oh wait, that was from phplist.com instead of phplist.org. Similar to wordpress's .com vs .org, I guess
  4. phplist.org also lists a/b testing! https://www.phplist.org/features/#block-rich
  5. after briefly reviewing the docs, I decided to download & install phplist. It's not in the repos..
[root@hetzner2 2018-08-20]# date
Mon Aug 20 23:46:30 UTC 2018
[root@hetzner2 2018-08-20]# pwd
/var/tmp/phplist/2018-08-20
[root@hetzner2 2018-08-20]# wget https://iweb.dl.sourceforge.net/project/phplist/phplist/3.3.3/phplist-3.3.3.tgz
--2018-08-20 23:45:59--  https://iweb.dl.sourceforge.net/project/phplist/phplist/3.3.3/phplist-3.3.3.tgz
Resolving iweb.dl.sourceforge.net (iweb.dl.sourceforge.net)... 2607:f748:10:12::5f:2, 192.175.120.182
Connecting to iweb.dl.sourceforge.net (iweb.dl.sourceforge.net)|2607:f748:10:12::5f:2|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13754917 (13M) [application/octet-stream]
Saving to: ‘phplist-3.3.3.tgz’

100%[======================================================================================>] 13,754,917  7.03MB/s   in 1.9s   

2018-08-20 23:46:01 (7.03 MB/s) - ‘phplist-3.3.3.tgz’ saved [13754917/13754917]
[root@hetzner2 2018-08-20]# 
  1. I created the nginx vhost for phplist
  2. I created a database named 'phplist_db' and a corresponding user 'phplist_user' for phplist & stored the creds in keepass (see the sketch at the end of this list)
  3. I created 2x new dns entries for (microfactory,phplist).opensourceecology.org on cloudflare pointing to 138.201.84.243
  4. I mostly set up phplist, but then I realized we'll want it to run on port 4443 like munin, behind htaccess auth. Therefore, I need to redo the nginx/varnish/httpd configs to be more like the munin config & generate the cert properly.
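  5. a minimal sketch of the database step above (item 2), not a transcript of what was run: the grants & charset are assumptions, and the placeholder password stands in for the real one in keepass
# create the phplist database & user; replace the placeholder with the password from keepass
mysql -u root -p <<'EOF'
CREATE DATABASE phplist_db CHARACTER SET utf8;
CREATE USER 'phplist_user'@'localhost' IDENTIFIED BY 'PLACEHOLDER_from_keepass';
GRANT ALL PRIVILEGES ON phplist_db.* TO 'phplist_user'@'localhost';
FLUSH PRIVILEGES;
EOF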

Sat Aug 18, 2018

  1. Marcin emailed me asking me how we can email users
    1. the link is here https://wiki.opensourceecology.org/wiki/Special:EmailUser
    2. I added this to the https://wiki.opensourceecology.org/wiki/Mediawiki#Special_Pages
  2. We got an update on my ticket requesting CODE (LibreOffice Online) to add the ability to draw lines & shapes! They said it is scheduled to be added, but couldn't provide an ETA. Good enough I suppose https://bugs.documentfoundation.org/show_bug.cgi?id=113386
  3. my old job requesting the inventory of glacier expired again, so I'll try again..
[root@hetzner2 ~]# aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'
{
	"location": "/099400651767/vaults/deleteMeIn2020/jobs/c84MNJlg2CA0V1NYXezXhppBpCSxmK_UjcE6_eXthqNI7956x1AVafRrnfNFvvr07WWWyJK7_rLM5beM9tmCMbGFWuE_", 
	"jobId": "c84MNJlg2CA0V1NYXezXhppBpCSxmK_UjcE6_eXthqNI7956x1AVafRrnfNFvvr07WWWyJK7_rLM5beM9tmCMbGFWuE_"
}
[root@hetzner2 ~]# 

Fri Aug 10, 2018

  1. Catarina asked me about importing settings into the oshine theme's options. The page itself complains about max_execution_time being only 30, recommending it to be 90--but it also says that it may work even if it's less than 30. We have a very beefy server that's mostly idle. I asked her to just give it a try & let me know if she encounters any issues.

Wed Aug 08, 2018

  1. marcin mentioned that the site I made was d3d.opensourceecology.org, but it should have been 3dp.opensourceecology.org. whoops. At least, this is a good test of the changes I made to the install process. And--not to mention--wordpress came out with a new version the fucking day after I created d3d. So I'll just create a brand new site 3dp.opensourceecology.org
    1. first I created the dns entry 3dp to 138.201.84.243
    2. I created a new db password & stored it to keepass
    3. I updated the cert
[root@hetzner2 sites-enabled]# certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/www.opensourceecology.org/htdocs/ -d opensourceecology.org  -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org  -w /var/www/html/staging.opensourceecology.org/htdocs -d staging.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/d3d.opensourceecology.org/htdocs/ -d d3d.opensourceecology.org -w /var/www/html/3dp.opensourceecology.org/htdocs/ -d 3dp.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d munin.opensourceecology.org
...
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/opensourceecology.org/privkey.pem
   Your cert will expire on 2018-11-06. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

[root@hetzner2 sites-enabled]# 
    1. I finished up by adding the cmota & marcin users, and I sent them both emails telling them that the new site is online.
  1. ...
  1. the job I kicked-off yesterday to get our inventory of the glacier vault (for confirming that the hetzner1 backups were successfully uploaded to glacier) won't return its result!
[root@hetzner2 ~]# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id 'YITmkK2nn2EuYLNLvQdHfGK0vubrFJmRIm2a9-DUfUhH4sPRo5Z7a31J2n7K94BIN_OhSeaIu3-AyfpDCUpauB8niYZ1' output.json

An error occurred (ResourceNotFoundException) when calling the GetJobOutput operation: The job ID was not found: YITmkK2nn2EuYLNLvQdHfGK0vubrFJmRIm2a9-DUfUhH4sPRo5Z7a31J2n7K94BIN_OhSeaIu3-AyfpDCUpauB8niYZ1
[root@hetzner2 ~]# 
  1. I fucking hate glacier. This job takes at least 4 hours to run, and they delete its output before I have a chance to grab it the next day?? I guess we'll have to pay them for another job run..
[root@hetzner2 ~]# aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'
{
	"location": "/099400651767/vaults/deleteMeIn2020/jobs/DaK2Jug09moAt-kwWMdWBHxAztsMHSFS9dHJgHXJO4ym6HaAdZKUXAi6ZexFoQfzQilvaxAEgxXScMXhTq2ayLwSE1nF", 
	"jobId": "DaK2Jug09moAt-kwWMdWBHxAztsMHSFS9dHJgHXJO4ym6HaAdZKUXAi6ZexFoQfzQilvaxAEgxXScMXhTq2ayLwSE1nF"
}
[root@hetzner2 ~]# 
  1. ...
  1. ok, so the installation of the b2 tool for working with backblaze b2 over the cli horribly corrupted our python install, because it requires pip (and pip's packages break yum's python). In order to use b2, we need to install it with pip inside a sandboxed virtual environment. First I install python-virtualenv using yum
[root@hetzner2 ~]# yum install python-virtualenv
...
Dependency Installed:
  python-devel.x86_64 0:2.7.5-69.el7_5                                                                                       

Complete!
[root@hetzner2 ~]# 
  1. now, as b2user, I create the virtualenv directory in their home folder
[b2user@hetzner2 ~]$ date
Wed Aug  8 22:46:01 UTC 2018
[b2user@hetzner2 ~]$ pwd
/home/b2user
[b2user@hetzner2 ~]$ mkdir virtualenv
[b2user@hetzner2 ~]$ cd virtualenv/
[b2user@hetzner2 virtualenv]$ virtualenv .
New python executable in /home/b2user/virtualenv/bin/python
Installing setuptools, pip, wheel...done.
[b2user@hetzner2 virtualenv]$ ls -lah
total 20K
drwxrwxr-x 5 b2user b2user 4.0K Aug  8 22:46 .
drwx------ 7 b2user b2user 4.0K Aug  8 22:46 ..
drwxrwxr-x 2 b2user b2user 4.0K Aug  8 22:46 bin
drwxrwxr-x 2 b2user b2user 4.0K Aug  8 22:46 include
drwxrwxr-x 3 b2user b2user 4.0K Aug  8 22:46 lib
lrwxrwxrwx 1 b2user b2user    3 Aug  8 22:46 lib64 -> lib
[b2user@hetzner2 virtualenv]$ 
  1. I tried the install again, but this time I used the binary in the virtualenv https://www.backblaze.com/b2/docs/quick_command_line.html
[b2user@hetzner2 B2_Command_Line_Tool]$ ~/virtualenv/bin/python setup.py install
...
Installed /home/b2user/virtualenv/lib/python2.7/site-packages/python_dateutil-2.7.3-py2.7.egg
Finished processing dependencies for b2==1.3.3
[b2user@hetzner2 B2_Command_Line_Tool]$ b2
-bash: /bin/b2: No such file or directory
[b2user@hetzner2 B2_Command_Line_Tool]$ ls
appveyor.yml  ci       LICENSE        README.md               requirements.txt   test
b2            contrib  Makefile       README.release.md       run-unit-tests.sh  test_b2_command_line.py
b2.egg-info   dist     MANIFEST.in    requirements-setup.txt  setup.cfg
build         doc      pre-commit.sh  requirements-test.txt   setup.py
[b2user@hetzner2 B2_Command_Line_Tool]$ 
  1. that worked!
[b2user@hetzner2 B2_Command_Line_Tool]$ ~/virtualenv/bin/b2 version
b2 command line tool, version 1.3.3
[b2user@hetzner2 B2_Command_Line_Tool]$ 
  1. I confirmed that certbot wasn't broken
[root@hetzner2 ~]# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: opensourceecology.org
	Domains: opensourceecology.org 3dp.opensourceecology.org awstats.opensourceecology.org d3d.opensourceecology.org fef.opensourceecology.org forum.opensourceecology.org munin.opensourceecology.org oswh.opensourceecology.org staging.opensourceecology.org wiki.opensourceecology.org www.opensourceecology.org
	Expiry Date: 2018-11-06 21:02:32+00:00 (VALID: 89 days)
	Certificate Path: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
	Private Key Path: /etc/letsencrypt/live/opensourceecology.org/privkey.pem
  Certificate Name: openbuildinginstitute.org
	Domains: www.openbuildinginstitute.org awstats.openbuildinginstitute.org openbuildinginstitute.org seedhome.openbuildinginstitute.org
	Expiry Date: 2018-09-11 03:20:08+00:00 (VALID: 33 days)
	Certificate Path: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem
	Private Key Path: /etc/letsencrypt/live/openbuildinginstitute.org/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[root@hetzner2 ~]# 
  1. I updated the path to the b2 binary from '/bin/b2' to be '/home/b2user/virtualenv/bin/b2' (a sketch of the resulting upload call follows the listing below)
  2. now I checked the files in our b2 bucket. There weren't any, but that could be because the lifecycle rules deleted all the old daily backups while this b2 binary was broken (I broke it when I fixed certbot). I'll just have to wait a few days--and a few months--to check up on this again.
[b2user@hetzner2 B2_Command_Line_Tool]$ ~/virtualenv/bin/b2 list-buckets
5605817c251dadb96e4d0118  allPrivate  ose-server-backups
[b2user@hetzner2 B2_Command_Line_Tool]$ ~/virtualenv/bin/b2 list-file-names ose-server-backups
{
  "files": [], 
  "nextFileName": null
}
[b2user@hetzner2 B2_Command_Line_Tool]$ ~/virtualenv/bin/b2 ls ose-server-backups
[b2user@hetzner2 B2_Command_Line_Tool]$ 
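  1. for reference, roughly what an upload call looks like with the corrected path from item 1 above; the bucket name & binary path are from this log, but the archive path is a placeholder
# run as b2user: upload one encrypted archive using the virtualenv's b2 binary
archive=/path/to/hetzner2_YYYYMMDD-072001.tar.gpg
/home/b2user/virtualenv/bin/b2 upload-file ose-server-backups "$archive" "$(basename "$archive")"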

Tue Aug 07, 2018

  1. I checked the status of the glacier uploads for the hetzner1 backups, and it looks like all the encrypted archive files (ending in .gpg) have been deleted, indicating a successful upload
[root@hetzner2 sync]# date
Tue Aug  7 17:51:55 UTC 2018
[root@hetzner2 sync]# pwd
/var/tmp/deprecateHetzner1/sync
[root@hetzner2 sync]# ls -lah
total 5.2M
drwxr-xr-x  2 root root 4.0K Aug  1 00:33 .
drwx------ 10 root root 4.0K Jul 28 22:26 ..
-rw-r--r--  1 root root 810K Jul 28 22:37 hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2
-rw-r--r--  1 root root 4.0M Jul 28 23:31 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2
-rw-r--r--  1 root root 100K Jul 28 23:26 hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2
-rw-r--r--  1 root root 102K Jul 28 23:03 hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2
-rw-r--r--  1 root root 187K Jul 28 23:13 hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2
[root@hetzner2 sync]# 
  1. I kicked-off an inventory list; I'll check that tomorrow to confirm that all the archives are present
[root@hetzner2 sync]# aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'
{
	"location": "/099400651767/vaults/deleteMeIn2020/jobs/YITmkK2nn2EuYLNLvQdHfGK0vubrFJmRIm2a9-DUfUhH4sPRo5Z7a31J2n7K94BIN_OhSeaIu3-AyfpDCUpauB8niYZ1", 
	"jobId": "YITmkK2nn2EuYLNLvQdHfGK0vubrFJmRIm2a9-DUfUhH4sPRo5Z7a31J2n7K94BIN_OhSeaIu3-AyfpDCUpauB8niYZ1"
}
[root@hetzner2 sync]# 
  1. continued with the creation of d3d.opensourceecology.org https://wiki.opensourceecology.org/wiki/Wordpress#Create_New_Wordpress_Vhost
    1. added d3d.opensourceecology.org to certbot
[root@hetzner2 ~]# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: opensourceecology.org
	Domains: opensourceecology.org awstats.opensourceecology.org fef.opensourceecology.org forum.opensourceecology.org munin.opensourceecology.org oswh.opensourceecology.org staging.opensourceecology.org wiki.opensourceecology.org www.opensourceecology.org
	Expiry Date: 2018-10-30 20:36:34+00:00 (VALID: 84 days)
	Certificate Path: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
	Private Key Path: /etc/letsencrypt/live/opensourceecology.org/privkey.pem
  Certificate Name: openbuildinginstitute.org
	Domains: www.openbuildinginstitute.org awstats.openbuildinginstitute.org openbuildinginstitute.org seedhome.openbuildinginstitute.org
	Expiry Date: 2018-09-11 03:20:08+00:00 (VALID: 34 days)
	Certificate Path: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem
	Private Key Path: /etc/letsencrypt/live/openbuildinginstitute.org/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[root@hetzner2 ~]# 
[root@hetzner2 nginx]# certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/www.opensourceecology.org/htdocs/ -d opensourceecology.org  -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org  -w /var/www/html/staging.opensourceecology.org/htdocs -d staging.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/d3d.opensourceecology.org/htdocs/ -d d3d.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d munin.opensourceecology.org
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/opensourceecology.org/privkey.pem
   Your cert will expire on 2018-11-05. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

[root@hetzner2 nginx]# /bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
[root@hetzner2 nginx]# nginx -t && service nginx reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Redirecting to /bin/systemctl reload nginx.service
[root@hetzner2 nginx]# 
    1. I had some issues attempting to install the plugins with the wp tool
[root@hetzner2 nginx]# sudo -u wp -i wp --path=${docrootDir} plugin install google-authenticator --activate
PHP Warning:  ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44
PHP Warning:  file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/staging.opensourceecology.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/var/www/html/d3d.opensourceecology.org:/usr/share/cacti/:/etc/cacti/) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454
Warning: google-authenticator: An unexpected error occurred. Something may be wrong with WordPress.org or this server’s configuration. If you continue to have problems, please try the <a href="https://wordpress.org/support/">support forums</a>.
Activating 'google-authenticator'...
Warning: The 'google-authenticator' plugin could not be found.
Error: No plugins installed.
[root@hetzner2 nginx]# 
      1. even a search doesn't work
[root@hetzner2 nginx]# sudo -u wp -i wp --path=${docrootDir} plugin search google
PHP Warning:  ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44
PHP Warning:  file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/staging.opensourceecology.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/var/www/html/d3d.opensourceecology.org:/usr/share/cacti/:/etc/cacti/) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454
Error: An unexpected error occurred. Something may be wrong with WordPress.org or this server’s configuration. If you continue to have problems, please try the <a href="https://wordpress.org/support/">support forums</a>. Try again
[root@hetzner2 nginx]# 
      1. and attempting to search from an old site also fails. what changed?
[root@hetzner2 nginx]# sudo -u wp -i wp --path=/var/www/html/www.opensourceecology.org/htdocs/ plugin search google
PHP Warning:  ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 26
PHP Warning:  ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44
PHP Notice:  Undefined index: HTTP_HOST in /var/www/html/www.opensourceecology.org/htdocs/wp-content/plugins/vcaching/vcaching.php on line 194
PHP Warning:  file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/staging.opensourceecology.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/var/www/html/d3d.opensourceecology.org:/usr/share/cacti/:/etc/cacti/) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454
PHP Warning:  An unexpected error occurred. Something may be wrong with WordPress.org or this server’s configuration. If you continue to have problems, please try the <a href="https://wordpress.org/support/">support forums</a>. (WordPress could not establish a secure connection to WordPress.org. Please contact your server administrator.) in /var/www/html/www.opensourceecology.org/htdocs/wp-admin/includes/plugin-install.php on line 169
Error: An unexpected error occurred. Something may be wrong with WordPress.org or this server’s configuration. If you continue to have problems, please try the <a href="https://wordpress.org/support/">support forums</a>. Try again
[root@hetzner2 nginx]# 
      1. I suppose it's possible that the wordpress.org api endpoint is down, but how do I verify this? I found no status page on their site.
      2. enabling debugging was unhelpful
[root@hetzner2 nginx]# sudo -u wp -i wp --debug --path=/var/www/html/www.opensourceecology.org/htdocs/ plugin search google
Debug (bootstrap): Using default global config: /home/wp/.wp-cli/config.yml (0.108s)
Debug (bootstrap): No project config found (0.109s)
Debug (bootstrap): argv: /home/wp/bin/wp --debug --path=/var/www/html/www.opensourceecology.org/htdocs/ plugin search google (0.109s)
Debug (bootstrap): ABSPATH defined: /var/www/html/www.opensourceecology.org/htdocs/ (0.11s)
Debug (bootstrap): Begin WordPress load (0.111s)
Debug (bootstrap): wp-config.php path: /var/www/html/www.opensourceecology.org/wp-config.php (0.111s)
PHP Warning:  ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44
PHP Notice:  Undefined index: HTTP_HOST in /var/www/html/www.opensourceecology.org/htdocs/wp-content/plugins/vcaching/vcaching.php on line 194
Debug (bootstrap): Loaded WordPress (0.439s)
Debug (bootstrap): Running command: plugin search (0.441s)
PHP Warning:  file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/staging.opensourceecology.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/var/www/html/d3d.opensourceecology.org:/usr/share/cacti/:/etc/cacti/) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454
PHP Warning:  An unexpected error occurred. Something may be wrong with WordPress.org or this server’s configuration. If you continue to have problems, please try the <a href="https://wordpress.org/support/">support forums</a>. (WordPress could not establish a secure connection to WordPress.org. Please contact your server administrator.) in /var/www/html/www.opensourceecology.org/htdocs/wp-admin/includes/plugin-install.php on line 169
Error: An unexpected error occurred. Something may be wrong with WordPress.org or this server’s configuration. If you continue to have problems, please try the <a href="https://wordpress.org/support/">support forums</a>. Try again
[root@hetzner2 nginx]# 
      1. ah, so the issue was that we still had 'WP_HTTP_BLOCK_EXTERNAL' defined in wp-config.php. I updated the install instructions to include commenting out this line before the plugin installs & uncommenting it at the end. https://wiki.opensourceecology.org/wiki/Wordpress#Create_New_Wordpress_Vhost
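      2. a minimal sketch of that workaround, not a transcript: the define name & wp-config.php path are from this log, but the sed invocations & plugin name here are just illustrative
# temporarily comment out the outbound-HTTP block so wp-cli can reach api.wordpress.org
wpConfig=/var/www/html/d3d.opensourceecology.org/wp-config.php
docrootDir=/var/www/html/d3d.opensourceecology.org/htdocs
sed -i "s|^\(define( *'WP_HTTP_BLOCK_EXTERNAL'.*\)|//\1|" "$wpConfig"
sudo -u wp -i wp --path=${docrootDir} plugin install google-authenticator --activate
# restore the block once the plugin installs are done
sed -i "s|^//\(define( *'WP_HTTP_BLOCK_EXTERNAL'.*\)|\1|" "$wpConfig"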
    1. hit another issue with an error: "Could not create directory"
[root@hetzner2 nginx]# sudo -u wp -i wp --debug --path=${docrootDir} theme update --all
Debug (bootstrap): Using default global config: /home/wp/.wp-cli/config.yml (0.108s)
Debug (bootstrap): No project config found (0.109s)
Debug (bootstrap): argv: /home/wp/bin/wp --debug --path=/var/www/html/d3d.opensourceecology.org/htdocs theme update --all (0.109s)
Debug (bootstrap): ABSPATH defined: /var/www/html/d3d.opensourceecology.org/htdocs/ (0.11s)
Debug (bootstrap): Begin WordPress load (0.111s)
Debug (bootstrap): wp-config.php path: /var/www/html/d3d.opensourceecology.org/wp-config.php (0.111s)
PHP Warning:  ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44
Debug (bootstrap): Loaded WordPress (0.266s)
Debug (bootstrap): Running command: theme update (0.268s)
PHP Warning:  file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/staging.opensourceecology.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/var/www/html/d3d.opensourceecology.org:/usr/share/cacti/:/etc/cacti/) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454
Enabling Maintenance mode...
Downloading update from https://downloads.wordpress.org/theme/twentyseventeen.1.7.zip...
Using cached file '/home/wp/.wp-cli/cache/theme/twentyseventeen-1.7.zip'...
Unpacking the update...
Warning: Could not create directory.
+-----------------+-------------+-------------+--------+
| name            | old_version | new_version | status |
+-----------------+-------------+-------------+--------+
| twentyseventeen | 1.6         | 1.7         | Error  |
+-----------------+-------------+-------------+--------+
Success: Theme already updated.
[root@hetzner2 nginx]# 
      1. so I already documented this & the solution, but didn't integrate it into the docs for creating a new wordpress vhost https://wiki.opensourceecology.org/wiki/Wordpress#Warning:_Could_not_create_directory.
      2. I updated the documentation to include the above step https://wiki.opensourceecology.org/wiki/Wordpress#Create_New_Wordpress_Vhost
      3. actually, that didn't fix it. It would be really nice if the debug version of the wp-cli tool actually listed which dir it failed to make, but gradually increasing permissions showed that it was attempting to create 'wp-content/upgrade/'.
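      4. one possible workaround (not necessarily what was done here; the ownership is an assumption): pre-create the dir that wp-cli was failing to make
# pre-create wp-content/upgrade & let the wp user (which runs wp-cli) write to it
docrootDir=/var/www/html/d3d.opensourceecology.org/htdocs
mkdir -p "${docrootDir}/wp-content/upgrade"
chown wp "${docrootDir}/wp-content/upgrade"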
    1. I created users for Marcin & Catarina (a wp-cli sketch of this step is at the end of this list). They _should_ receive an email from wordpress, unless wp does something silly like attempt to connect directly to the email servers (which we'd block) rather than make a system call using mail().
    2. using the zip from themeforest that Catarina scp'd to the server, I unzipped the oshine theme v6.5 zip into the 'wp-content/themes/' dir, re-ran the permissions fix, and enabled the theme successfully in the wp wui. Then I gathered together all the zips of plugins included with the themeforest zip archive, and extracted them into the 'wp-content/plugins/' dir, re-ran the permissions fix, and enabled each of them in the wordpress wui.
    3. I sent Catarina & Marcin emails about the site being up & asked them to check their email inboxes for credentials
      1. Catarina confirmed that she received the email and was able to successfully login
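    4. a minimal wp-cli sketch of the user-creation step in item 1; the usernames match the ones used elsewhere in this log, but the email addresses & role are placeholders, and --send-email is what makes wordpress mail out the credentials
# create the two wordpress users & have wordpress email them their credentials
docrootDir=/var/www/html/d3d.opensourceecology.org/htdocs
sudo -u wp -i wp --path=${docrootDir} user create marcin marcin@example.com --role=administrator --send-email
sudo -u wp -i wp --path=${docrootDir} user create cmota catarina@example.com --role=administrator --send-email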


Wed Aug 01, 2018

  1. Marcin responded to me about phplist. ose does prefer a free floss solution. I think I understand, though, why most low-budget nonprofits choose mailchimp despite the cost: their analytics & a/b tests provide the development/marketing teams essential tools to increase visibility, clicks, and donations, which justifies the costs. So I'll proceed by researching phplist and its floss alternatives to see which provides the best analytics and a/b tests
    1. https://www.quora.com/What-are-the-cheaper-better-alternatives-to-MailChimp
    2. https://alternativeto.net/software/phplist/?license=opensource
    3. https://mailchimp.com/help/about-ab-testing-campaigns/
    4. https://mailchimp.com/features/ab-testing/
  1. Marcin asked me to build d3d.opensourceecology.org as my highest priority task. I said I could do this by mid-next-week, and I asked Catarina to buy the latest version of the theme she wants & send me the zip, which she said she would do within a few days.
    1. first step: dns. I created d3d.opensourceecology.org in our cloudflare account to point to 138.201.84.243
    2. cloudflare wouldn't let me login without entering a 2fa token sent (insecurely) via email to our shared cloudflare@opensourceecology.org account
    3. gmail wouldn't let me login to the above email address for the same reason. They wanted me to enter a phone number for a 2fa token to be sent (insecurely) via sms
    4. this shit is fucking annoying. I went to the g suite as superuser to see if I could permanently disable these god damn bullshit 2fa requests from gmail. I found that Users -> Cloud Flare -> Security had a button to "Turn off for 10 mins" next to the "Login Challenge" section with the description "Turn off identity questions for 10 minutes after a suspicious attempt to sign in. Disabling login challenge will make the account less secure"
    5. we have a fucking 100-character, unique password securely stored in a keepass. I understand what google is doing, but that's for people who reuse shitty passwords for many sites. We should be able to disable this fucking bug indefinitely on a per-account basis >:O
    6. finally, after the temp disable, I was able to login to the gmail, get the token for cloudflare, login to cloudflare, and then add the dns record for 'd3d.opensourceecology.org'
  2. I generated a 70-character password for a new d3d_user mysql user and added it to keepass
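    1. for reference, one way to generate such a password (the actual generator used isn't recorded here)
# 70 random alphanumeric characters from /dev/urandom
< /dev/urandom tr -dc 'A-Za-z0-9' | head -c 70; echo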
  3. I went to create the cert, but certbot failed!
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 9, in <module>
	load_entry_point('certbot==0.19.0', 'console_scripts', 'certbot')()
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 479, in load_entry_point
	return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 2703, in load_entry_point
	return ep.load()
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 2321, in load
	return self.resolve()
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 2327, in resolve
	module = import(self.module_name, fromlist=['name'], level=0)
  File "/usr/lib/python2.7/site-packages/certbot/main.py", line 19, in <module>
	from certbot import client
  File "/usr/lib/python2.7/site-packages/certbot/client.py", line 11, in <module>
	from acme import client as acme_client
  File "/usr/lib/python2.7/site-packages/acme/client.py", line 31, in <module>
	requests.packages.urllib3.contrib.pyopenssl.inject_into_urllib3()  # type: ignore
  File "/usr/lib/python2.7/site-packages/urllib3/contrib/pyopenssl.py", line 118, in inject_into_urllib3
	_validate_dependencies_met()
  File "/usr/lib/python2.7/site-packages/urllib3/contrib/pyopenssl.py", line 153, in _validate_dependencies_met
	raise ImportError("'pyOpenSSL' module missing required functionality. "
ImportError: 'pyOpenSSL' module missing required functionality. Try upgrading to v0.14 or newer.
[root@hetzner2 htdocs]# 
  1. I (foolishly) was playing with pip earlier to get b2 working. I think this is why. I fucking hate pip, and shouldn't have touched it on a production box. https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q3#Tue_Jul_17.2C_2018
    1. in an effort to get b2 working, I installed pip from the yum repo, had pip update itself, installed setuptools>=20.2 from pip, and finally installed this long list via pip: "six, python-dateutil, backports.functools-lru-cache, arrow, funcsigs, logfury, certifi, chardet, urllib3, idna, requests, tqdm, futures, b2"
    2. even ^ that failed (fucking pip) so I resorted to installing from git
  2. I removed the pip-installed certbot and related depends, then installed with yum
pip uninstall certbot certbot-apache certbot-nginx requests six urllib3 acme
yum install python-requests python-six python-urllib3
  1. this got me a new error with acme
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 5, in <module>
	from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 3098, in <module>
	@_call_aside
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 3082, in _call_aside
	f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 3111, in _initialize_master_working_set
	working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 573, in _build_master
	ws.require(requires)
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 891, in require
	needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 777, in resolve
	raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'acme>0.24.0' distribution was not found and is required by certbot
[root@hetzner2 htdocs]# 
  1. I'm just going to fucking uninstall pip and all its packages
[root@hetzner2 htdocs]# packages=$(pip list 2>&1 | tail -n+3 | head -n-2 | awk '{print $1}')
[root@hetzner2 htdocs]# echo $packages
arrow b2 backports.functools-lru-cache backports.ssl-match-hostname boto certbot certifi cffi chardet ConfigArgParse configobj cryptography decorator duplicity enum34 fasteners funcsigs future futures GnuPGInterface google-api-python-client httplib2 idna iniparse ipaddress IPy iso8601 javapackages josepy keyring lockfile logfury lxml mock ndg-httpsclient oauth2client paramiko parsedatetime perf pexpect pip ply policycoreutils-default-encoding psutil pyasn1 pyasn1-modules pycparser pycurl PyDrive pygobject pygpgme pyliblzma pyOpenSSL pyparsing pyRFC3339 python-augeas python-dateutil python-gflags python-linux-procfs python2-pythondialog pytz pyudev pyxattr PyYAML requests requests-toolbelt rsa schedutils seobject sepolicy setuptools slip slip.dbus SQLAlchemy tqdm uritemplate urlgrabber yum-metadata-parser zope.component zope.event zope.interface
[root@hetzner2 htdocs]# 
[root@hetzner2 htdocs]# for p in $packages; do pip uninstall $p; done
[root@hetzner2 htdocs]# yum remove python-pip
...
Removed:
  python2-pip.noarch 0:8.1.2-6.el7                                                                                                               

Complete!
[root@hetzner2 htdocs]# yum install certbot
...
Updated:
  certbot.noarch 0:0.26.1-1.el7                                                                                                                  

Dependency Updated:
  python2-certbot.noarch 0:0.26.1-1.el7                                                                                                          

Complete!
[root@hetzner2 htdocs]# 
  1. I still have errors, but a new one. Fucking pip!
[root@hetzner2 htdocs]# certbot certificates
Traceback (most recent call last):
  File "/bin/certbot", line 5, in <module>
	from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 3098, in <module>
	@_call_aside
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 3082, in _call_aside
	f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 3111, in _initialize_master_working_set
	working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 573, in _build_master
	ws.require(requires)
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 891, in require
	needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 777, in resolve
	raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'parsedatetime>=1.3' distribution was not found and is required by certbot
[root@hetzner2 htdocs]# 
  1. I slowly began to install these packages back from yum as needed, but I hit an issue with urllib3. A find shows another version installed by the aws cli
[root@hetzner2 htdocs]# find / -name urllib3
/root/.local/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3
/root/.local/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3
[root@hetzner2 htdocs]# mv /root/.local /root/.local.bak
[root@hetzner2 htdocs]#
  1. after install & move
ImportError: No module named 'requests.packages.urllib3'
[root@hetzner2 htdocs]# find / -name urllib3
/root/.local.bak/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3
/root/.local.bak/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3
/usr/lib/python2.7/site-packages/urllib3
[root@hetzner2 htdocs]# 
  1. here are all the packages; not sure why it can't fucking find urllib3
[root@hetzner2 htdocs]# ls /usr/lib/python2.7/site-packages/
acme                                                 GnuPGInterface.pyc                         python_augeas-0.5.0-py2.7.egg-info
acme-0.25.1-py2.7.egg-info                           GnuPGInterface.pyo                         python_dateutil-2.7.3.dist-info
ANSI.py                                              html                                       python_gflags-2.0-py2.7.egg-info
ANSI.pyc                                             http                                       python_linux_procfs-0.4.9-py2.7.egg-info
ANSI.pyo                                             idna                                       pytz
augeas.py                                            idna-2.4-py2.7.egg-info                    pytz-2016.10-py2.7.egg-info
augeas.pyc                                           iniparse                                   pyudev
augeas.pyo                                           iniparse-0.4-py2.7.egg-info                pyudev-0.15-py2.7.egg-info
backports                                            ipaddress-1.0.16-py2.7.egg-info            queue
backports.functools_lru_cache-1.2.1-py2.7.egg-info   ipaddress.py                               repoze
backports.functools_lru_cache-1.2.1-py2.7-nspkg.pth  ipaddress.pyc                              repoze.lru-0.4-py2.7.egg-info
builtins                                             ipaddress.pyo                              repoze.lru-0.4-py2.7-nspkg.pth
cached_property-1.3.0-py2.7.egg-info                 IPy-0.75-py2.7.egg-info                    reprlib
cached_property.py                                   IPy.py                                     requests
cached_property.pyc                                  IPy.pyc                                    requests-2.19.1-py2.7.egg
cached_property.pyo                                  IPy.pyo                                    requests-2.6.0-py2.7.egg-info
certbot                                              josepy                                     requests_toolbelt
certbot-0.26.1-py2.7.egg-info                        josepy-1.1.0-py2.7.egg-info                requests_toolbelt-0.8.0-py2.7.egg-info
chardet                                              jsonschema                                 rpmUtils
chardet-2.2.1-py2.7.egg-info                         jsonschema-2.5.1-py2.7.egg-info            rsa
ConfigArgParse-0.11.0-py2.7.egg-info                 libfuturize                                rsa-3.4.1-py2.7.egg-info
configargparse.py                                    libpasteurize                              screen.py
configargparse.pyc                                   List-1.3.0-py2.7.egg                       screen.pyc
configargparse.pyo                                   lockfile                                   screen.pyo
configobj-4.7.2-py2.7.egg-info                       lockfile-0.9.1-py2.7.egg-info              setuptools
configobj.py                                         _markupbase                                setuptools-40.0.0.dist-info
configobj.pyc                                        mock-1.0.1-py2.7.egg-info                  six-1.9.0-py2.7.egg-info
configobj.pyo                                        mock.py                                    six.py
copyreg                                              mock.pyc                                   six.pyc
dateutil                                             mock.pyo                                   six.pyo
dialog.py                                            ndg                                        slip
dialog.pyc                                           ndg_httpsclient-0.3.2-py2.7.egg-info       slip-0.4.0-py2.7.egg-info
dialog.pyo                                           ndg_httpsclient-0.3.2-py2.7-nspkg.pth      slip.dbus-0.4.0-py2.7.egg-info
docopt-0.6.2-py2.7.egg-info                          parsedatetime                              socketserver
docopt.py                                            parsedatetime-2.4-py2.7.egg-info           texttable-1.3.1-py2.7.egg-info
docopt.pyc                                           past                                       texttable.py
docopt.pyo                                           pexpect-2.3-py2.7.egg-info                 texttable.pyc
_dummy_thread                                        pexpect.py                                 texttable.pyo
easy-install.pth                                     pexpect.pyc                                _thread
easy_install.py                                      pexpect.pyo                                tkinter
easy_install.pyc                                     pkg_resources                              tqdm-4.23.4-py2.7.egg
enum                                                 ply                                        tuned
enum34-1.0.4-py2.7.egg-info                          ply-3.4-py2.7.egg-info                     uritemplate
fdpexpect.py                                         procfs                                     uritemplate-3.0.0-py2.7.egg-info
fdpexpect.pyc                                        pxssh.py                                   urlgrabber
fdpexpect.pyo                                        pxssh.pyc                                  urlgrabber-3.10-py2.7.egg-info
firewall                                             pxssh.pyo                                  urllib3
FSM.py                                               pyasn1                                     urllib3-1.10.2-py2.7.egg-info
FSM.pyc                                              pyasn1-0.1.9-py2.7.egg-info                validate.py
FSM.pyo                                              pyasn1_modules                             validate.pyc
future                                               pyasn1_modules-0.0.8-py2.7.egg-info        validate.pyo
future-0.16.0-py2.7.egg-info                         pycparser                                  winreg
gflags.py                                            pycparser-2.14-py2.7.egg-info              xmlrpc
gflags.pyc                                           pyparsing-1.5.6-py2.7.egg-info             yum
gflags.pyo                                           pyparsing.py                               zope
gflags_validators.py                                 pyparsing.pyc                              zope.component-4.1.0-py2.7.egg-info
gflags_validators.pyc                                pyparsing.pyo                              zope.component-4.1.0-py2.7-nspkg.pth
gflags_validators.pyo                                pyrfc3339                                  zope.event-4.0.3-py2.7.egg-info
GnuPGInterface-0.3.2-py2.7.egg-info                  pyRFC3339-1.0-py2.7.egg-info               zope.event-4.0.3-py2.7-nspkg.pth
GnuPGInterface.py                                    python2_pythondialog-3.3.0-py2.7.egg-info
[root@hetzner2 htdocs]# 
  1. I uninstalled a lot of packages, then did the `ls` again. I've seen some correlation between urllib3 & requests, so maybe it's this lingering requests dir...
[root@hetzner2 htdocs]# yum remove python-parsedatetime python-mock python-josepy python-cryptography python-configargparse python-future python-six python-idna python-requests python-chardet python-requests-toolbelt python-urllib3 pyOpenSSL
...
[root@hetzner2 htdocs]# ls /usr/lib/python2.7/site-packages/
ANSI.py                                              GnuPGInterface-0.3.2-py2.7.egg-info        python_augeas-0.5.0-py2.7.egg-info
ANSI.pyc                                             GnuPGInterface.py                          python_dateutil-2.7.3.dist-info
ANSI.pyo                                             GnuPGInterface.pyc                         python_gflags-2.0-py2.7.egg-info
augeas.py                                            GnuPGInterface.pyo                         python_linux_procfs-0.4.9-py2.7.egg-info
augeas.pyc                                           iniparse                                   pytz
augeas.pyo                                           iniparse-0.4-py2.7.egg-info                pytz-2016.10-py2.7.egg-info
backports                                            ipaddress-1.0.16-py2.7.egg-info            pyudev
backports.functools_lru_cache-1.2.1-py2.7.egg-info   ipaddress.py                               pyudev-0.15-py2.7.egg-info
backports.functools_lru_cache-1.2.1-py2.7-nspkg.pth  ipaddress.pyc                              repoze
cached_property-1.3.0-py2.7.egg-info                 ipaddress.pyo                              repoze.lru-0.4-py2.7.egg-info
cached_property.py                                   IPy-0.75-py2.7.egg-info                    repoze.lru-0.4-py2.7-nspkg.pth
cached_property.pyc                                  IPy.py                                     requests-2.19.1-py2.7.egg
cached_property.pyo                                  IPy.pyc                                    rpmUtils
configobj-4.7.2-py2.7.egg-info                       IPy.pyo                                    rsa
configobj.py                                         jsonschema                                 rsa-3.4.1-py2.7.egg-info
configobj.pyc                                        jsonschema-2.5.1-py2.7.egg-info            screen.py
configobj.pyo                                        List-1.3.0-py2.7.egg                       screen.pyc
dateutil                                             lockfile                                   screen.pyo
dialog.py                                            lockfile-0.9.1-py2.7.egg-info              setuptools
dialog.pyc                                           pexpect-2.3-py2.7.egg-info                 setuptools-40.0.0.dist-info
dialog.pyo                                           pexpect.py                                 slip
docopt-0.6.2-py2.7.egg-info                          pexpect.pyc                                slip-0.4.0-py2.7.egg-info
docopt.py                                            pexpect.pyo                                slip.dbus-0.4.0-py2.7.egg-info
docopt.pyc                                           pkg_resources                              texttable-1.3.1-py2.7.egg-info
docopt.pyo                                           ply                                        texttable.py
easy-install.pth                                     ply-3.4-py2.7.egg-info                     texttable.pyc
easy_install.py                                      procfs                                     texttable.pyo
easy_install.pyc                                     pxssh.py                                   tqdm-4.23.4-py2.7.egg
enum                                                 pxssh.pyc                                  tuned
enum34-1.0.4-py2.7.egg-info                          pxssh.pyo                                  uritemplate
fdpexpect.py                                         pyasn1                                     uritemplate-3.0.0-py2.7.egg-info
fdpexpect.pyc                                        pyasn1-0.1.9-py2.7.egg-info                urlgrabber
fdpexpect.pyo                                        pyasn1_modules                             urlgrabber-3.10-py2.7.egg-info
firewall                                             pyasn1_modules-0.0.8-py2.7.egg-info        validate.py
FSM.py                                               pycparser                                  validate.pyc
FSM.pyc                                              pycparser-2.14-py2.7.egg-info              validate.pyo
FSM.pyo                                              pyparsing-1.5.6-py2.7.egg-info             yum
gflags.py                                            pyparsing.py                               zope
gflags.pyc                                           pyparsing.pyc                              zope.component-4.1.0-py2.7.egg-info
gflags.pyo                                           pyparsing.pyo                              zope.component-4.1.0-py2.7-nspkg.pth
gflags_validators.py                                 pyrfc3339                                  zope.event-4.0.3-py2.7.egg-info
gflags_validators.pyc                                pyRFC3339-1.0-py2.7.egg-info               zope.event-4.0.3-py2.7-nspkg.pth
gflags_validators.pyo                                python2_pythondialog-3.3.0-py2.7.egg-info
[root@hetzner2 htdocs]# find / -name requests
/root/.local.bak/lib/aws/lib/python2.7/site-packages/botocore/vendored/requests
/root/.local.bak/lib/aws/lib/python2.7/site-packages/pip/_vendor/requests
/usr/lib/python2.7/site-packages/requests-2.19.1-py2.7.egg/requests
[root@hetzner2 htdocs]# 
  1. fucking hell pip. I never should have played with the devil here. Pip is the god damn devil. Let's try to import it ourselves.
[maltfield@hetzner2 ~]$ date
Wed Aug  1 21:03:41 UTC 2018
[maltfield@hetzner2 ~]$ pwd
/home/maltfield
[maltfield@hetzner2 ~]$ which python
/usr/bin/python
[maltfield@hetzner2 ~]$ python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import math;
>>> import requests;
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/requests/init.py", line 58, in <module>
	from . import utils
  File "/usr/lib/python2.7/site-packages/requests/utils.py", line 32, in <module>
	from .exceptions import InvalidURL
  File "/usr/lib/python2.7/site-packages/requests/exceptions.py", line 10, in <module>
	from .packages.urllib3.exceptions import HTTPError as BaseHTTPError
  File "/usr/lib/python2.7/site-packages/requests/packages/init.py", line 95, in load_module
	raise ImportError("No module named '%s'" % (name,))
ImportError: No module named 'requests.packages.urllib3'
  1. the actual error is in /usr/lib/python2.7/site-packages/requests/exceptions.py on line 10
from .packages.urllib3.exceptions import HTTPError as BaseHTTPError
  1. using `rpm -V`, I got a list of corrupted python packages
[root@hetzner2 ~]# for package in $(rpm -qa | grep -i python); do if  `rpm -V $package` ; then echo $package; fi; done
python-javapackages-3.4.1-11.el7.noarch
python2-iso8601-0.1.11-7.el7.noarch
python-httplib2-0.9.2-1.el7.noarch
python2-keyring-5.0-3.el7.noarch
python-backports-1.0-8.el7.x86_64
python-decorator-3.4.0-3.el7.noarch
python-lxml-3.2.1-4.el7.x86_64
python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch
[root@hetzner2 ~]# 
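  1. for future reference, the same check can lean on `rpm -V`'s exit code (just a sketch, not what I ran above; rpm -V should exit non-zero when it finds discrepancies)
# print only the python packages whose installed files fail rpm verification
for package in $(rpm -qa | grep -i python); do
  rpm -V "$package" > /dev/null 2>&1 || echo "$package"
done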
  1. I tried to remove & re-install these packages
[root@hetzner2 ~]# yum remove python-javapckages python2-iso8601 python-httplib2 python2-keyring python-backports python-decorator python-lxml python-backports-ssl_match_hostname
...
[root@hetzner2 ~]# yum install python-javapckages python2-iso8601 python-httplib2 python2-keyring python-backports python-decorator python-lxml python-backports-ssl_match_hostname certbot jitsi java-1.8.0-openjdk tuned
  1. finally, it works again! Lesson learned: never, ever use pip.
[root@hetzner2 ~]# certbot
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
[root@hetzner2 ~]# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: opensourceecology.org
	Domains: opensourceecology.org awstats.opensourceecology.org fef.opensourceecology.org forum.opensourceecology.org munin.opensourceecology.org oswh.opensourceecology.org staging.opensourceecology.org wiki.opensourceecology.org www.opensourceecology.org
	Expiry Date: 2018-08-22 21:36:52+00:00 (VALID: 21 days)
	Certificate Path: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
	Private Key Path: /etc/letsencrypt/live/opensourceecology.org/privkey.pem
  Certificate Name: openbuildinginstitute.org
	Domains: www.openbuildinginstitute.org awstats.openbuildinginstitute.org openbuildinginstitute.org seedhome.openbuildinginstitute.org
	Expiry Date: 2018-09-11 03:20:08+00:00 (VALID: 40 days)
	Certificate Path: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem
	Private Key Path: /etc/letsencrypt/live/openbuildinginstitute.org/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[root@hetzner2 ~]# 
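  1. a quick sanity check for next time, before letting pip write anywhere: ask rpm who owns the target path (a sketch; pip clobbering rpm-owned dirs is exactly what bit us here)
# if rpm -qf names a package, then pip would be overwriting files owned by the distro package manager
rpm -qf /usr/lib/python2.7/site-packages/requests
rpm -qf /usr/lib/python2.7/site-packages/urllib3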
  1. undoing my changes during testing, I'll move the /root/.local dir back
[root@hetzner2 ~]# aws
-bash: aws: command not found
[root@hetzner2 ~]# mv .local.bak .local
[root@hetzner2 ~]# aws
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: too few arguments
[root@hetzner2 ~]# 
  1. all packages look good, and `certbot` still works!
[root@hetzner2 ~]# for package in $(rpm -qa | grep -i python); do if  `rpm -V $package` ; then echo $package; fi; done
[root@hetzner2 ~]# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: opensourceecology.org
	Domains: opensourceecology.org awstats.opensourceecology.org fef.opensourceecology.org forum.opensourceecology.org munin.opensourceecology.org oswh.opensourceecology.org staging.opensourceecology.org wiki.opensourceecology.org www.opensourceecology.org
	Expiry Date: 2018-08-22 21:36:52+00:00 (VALID: 21 days)
	Certificate Path: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem
	Private Key Path: /etc/letsencrypt/live/opensourceecology.org/privkey.pem
  Certificate Name: openbuildinginstitute.org
	Domains: www.openbuildinginstitute.org awstats.openbuildinginstitute.org openbuildinginstitute.org seedhome.openbuildinginstitute.org
	Expiry Date: 2018-09-11 03:20:08+00:00 (VALID: 40 days)
	Certificate Path: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem
	Private Key Path: /etc/letsencrypt/live/openbuildinginstitute.org/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[root@hetzner2 ~]# 
  1. b2 is no longer accessible
[root@hetzner2 ~]# su - b2user
Last login: Sat Jul 28 19:48:19 UTC 2018 on pts/105
[b2user@hetzner2 ~]$ b2
-bash: b2: command not found
[b2user@hetzner2 ~]$ 
  1. I went to install b2 for the b2user, but it complains about setuptools and tells me to use pip to get setuptools >=20.2. Fuck that!
[b2user@hetzner2 ~]$ mkdir sandbox
[b2user@hetzner2 ~]$ cd sandbox/
[b2user@hetzner2 sandbox]$ git clone https://github.com/Backblaze/B2_Command_Line_Tool.git
Cloning into 'B2_Command_Line_Tool'...
remote: Counting objects: 6010, done.
remote: Total 6010 (delta 0), reused 0 (delta 0), pack-reused 6009
Receiving objects: 100% (6010/6010), 1.49 MiB | 1.65 MiB/s, done.
Resolving deltas: 100% (4342/4342), done.
[b2user@hetzner2 sandbox]$ cd B2_Command_Line_Tool/
[b2user@hetzner2 B2_Command_Line_Tool]$ python setup.py install
setuptools 20.2 or later is required. To fix, try running: pip install "setuptools>=20.2"
[b2user@hetzner2 B2_Command_Line_Tool]$ 
  1. setuptools from yum is 0.9.8-7
[root@hetzner2 site-packages]# rpm -qa | grep setuptools
python-setuptools-0.9.8-7.el7.noarch
[root@hetzner2 site-packages]# 
  1. I'll have to fix b2 for the b2user. Perhaps I can figure out how to set up a separate env (see the virtualenv sketch at the end of this day's entry). I'll also have to finish setting up the d3d site, but at least certbot is fixed. That could have been nasty if our cert had expired while `certbot` was broken; all the sites would have become inaccessible once the cert expired (Let's Encrypt certs only last 90 days)! Fuck pip. Fuck pip. Fuck pip. Stick to the distro package manager. Always stick to the distro package manager.
  2. I went ahead and ran the letsencrypt renew script--which had been failing before. It renewed the ose cert.
[root@hetzner2 site-packages]# /root/bin/letsencrypt/renew.sh 
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/opensourceecology.org.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert is due for renewal, auto-renewing...
Plugins selected: Authenticator webroot, Installer None
Starting new HTTPS connection (1): acme-v02.api.letsencrypt.org
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for awstats.opensourceecology.org
http-01 challenge for fef.opensourceecology.org
http-01 challenge for forum.opensourceecology.org
http-01 challenge for munin.opensourceecology.org
http-01 challenge for opensourceecology.org
http-01 challenge for oswh.opensourceecology.org
http-01 challenge for staging.opensourceecology.org
http-01 challenge for wiki.opensourceecology.org
http-01 challenge for www.opensourceecology.org
Waiting for verification...
Cleaning up challenges

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/opensourceecology.org/fullchain.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/openbuildinginstitute.org.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not yet due for renewal

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem expires on 2018-09-11 (skipped)
Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/opensourceecology.org/fullchain.pem (success)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Redirecting to /bin/systemctl reload nginx.service
[root@hetzner2 site-packages]# 
  1. I went to www.opensourceecology.org in my browser and confirmed that the cert listed was issued from today's date (2018-08-01)
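  1. a possible fix for the b2user's broken `b2`: give it its own virtualenv so pip & its newer setuptools never touch /usr/lib/python2.7/site-packages (just a sketch; paths & package names are illustrative and untested)
# as b2user: create an isolated env, upgrade setuptools inside it only, then install b2 there
virtualenv /home/b2user/virtualenv/b2
source /home/b2user/virtualenv/b2/bin/activate
pip install --upgrade setuptools   # satisfies the >=20.2 requirement inside the venv only
pip install b2                     # or run `python setup.py install` from the git checkout
/home/b2user/virtualenv/b2/bin/b2 version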

Tue Jul 31, 2018

  1. Marcin asked me if we should accept ether. I said we should (if his hardware wallet supports ether), but that we should not think of ether as a hedge against bitcoin failing.
  2. the glacier upload failed again for all 3x files. I threw this damn thing in a while loop; it will stop attempting to re-upload once the files have been deleted by the same retry script (a sketch of the loop's intent follows the output below)
[root@hetzner2 ~]# while  grep -E 'gpg$'` ; do date; /root/bin/retryUploadToGlacier.sh; sleep 1; done
Tue Jul 31 23:37:45 UTC 2018
+ backupArchives='/var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_oseblog.tar.gpg /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osemain.tar.gpg'
...
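  1. for clarity, the intent of that loop is just this (a sketch of the idea, not the exact command I ran)
# keep re-running the retry script until no encrypted tarballs remain in the sync dir;
# the retry script deletes each archive only after it uploads successfully
while ls /var/tmp/deprecateHetzner1/sync/ | grep -E 'gpg$' > /dev/null; do
  date
  /root/bin/retryUploadToGlacier.sh
  sleep 1
done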
  1. did some research into phplist vs mailchimp. Sent Marcin an email asking why he chose phplist, if that research is final, if I should continue this research, or if I should just plunge forward in installing/configuring phplist.
    1. mailchimp is probably the most popular option. It's free up to 2,000 subscribers and 12,000 emails per month https://mailchimp.com/pricing/
  1. ...
  1. going down my TODO items, there's CODE https://wiki.opensourceecology.org/wiki/Google_docs_alternative
    1. at last look, we couldn't use them because there was no support for hyperlinks in impress/powerpoint. They fixed this 4 months after I submitted a bug report requesting the feature.
    2. I made another request for drawing arrows/lines/shapes, but there's been no status updates here https://bugs.documentfoundation.org/show_bug.cgi?id=113386
      1. I added a comment asking for an ETA
  1. ...
  1. I sent Marcin a follow-up email asking if he had a chance to test Janus yet. This is blocking my Jitsi poc.
  1. ...
  1. the only other task I can think of is mantis as a bug tracker. The question is: does it meet Marcin's requirements? Specifically, he wanted to have an "issue" track the progress of each machine in an automated fashion using templates. For example, we'd want an easy button to generate a "Development Template" such as this one https://docs.google.com/spreadsheets/d/1teVrReHnbS1xQFDJJhhmJX5_Lz2d5WkhJlq5yyeIfQw/edit#gid=1430144236
    1. it looks like this may already exist simply by using the "create clone" button on mantis https://www.mantisbt.org/bugs/view.php?id=8308

Mon Jul 30, 2018

  1. I checked on the status of the backups of hetzner1 being uploaded to glacier. Apparently the upload step failed.
[root@hetzner2 bin]# ./uploadToGlacier.sh
+ backupDirs='/var/tmp/deprecateHetzner1/microft /var/tmp/deprecateHetzner1/oseforum /var/tmp/deprecateHetzner1/osemain /var/tmp/deprecateHetzner1/osecivi /var/tmp/deprecateHetzner1/oseblog'
+ syncDir=/var/tmp/deprecateHetzner1/sync/
...
+ gpg --symmetric --cipher-algo aes --batch --passphrase-file /root/backups/ose-backups-cron.key /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar
+ rm /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar
+ /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg
Traceback (most recent call last):
  File "/root/bin/glacier.py", line 736, in <module>
	main()
  File "/root/bin/glacier.py", line 732, in main
	App().main()
  File "/root/bin/glacier.py", line 718, in main
	self.args.func()
  File "/root/bin/glacier.py", line 500, in archive_upload
	file_obj=self.args.file, description=name)
  File "/usr/lib/python2.7/site-packages/boto/glacier/vault.py", line 177, in create_archive_from_file
	writer.write(data)  
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 219, in write
	self.partitioner.write(data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 61, in write
	self._send_part()   
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 75, in _send_part
	self.send_fn(part)  
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 222, in _upload_part
	self.uploader.upload_part(self.next_part_index, part_data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/writer.py", line 129, in upload_part
	content_range, part_data)
  File "/usr/lib/python2.7/site-packages/boto/glacier/layer1.py", line 1279, in upload_part
	response_headers=response_headers)
  File "/usr/lib/python2.7/site-packages/boto/glacier/layer1.py", line 119, in make_request
	raise UnexpectedHTTPResponseError(ok_responses, response)
boto.glacier.exceptions.UnexpectedHTTPResponseError: Expected 204, got (408, code=RequestTimeoutException, message=Request timed out.)
+  1 -eq 0 
  1. that last line above was a check to see if the upload was successful. I forgot about this issue (god damn it glacier). Anyway, I had coded this in place so that successful uploads would have their files deleted; failed files would remain. A file listing shows that 3 of the 5 uploads failed (a sketch of this delete-on-success pattern follows the listing below)
[root@hetzner2 sync]# date
Mon Jul 30 19:43:19 UTC 2018
[root@hetzner2 sync]# pwd
/var/tmp/deprecateHetzner1/sync
[root@hetzner2 sync]# ls -lah
total 14G
drwxr-xr-x  2 root root 4.0K Jul 28 23:34 .
drwx------ 10 root root 4.0K Jul 28 22:26 ..
-rw-r--r--  1 root root 810K Jul 28 22:37 hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2
-rw-r--r--  1 root root 6.2G Jul 28 22:41 hetzner1_final_backup_before_hetzner1_deprecation_microft.tar.gpg
-rw-r--r--  1 root root 4.0M Jul 28 23:31 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2
-rw-r--r--  1 root root 4.4G Jul 28 23:34 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.tar.gpg
-rw-r--r--  1 root root 100K Jul 28 23:26 hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2
-rw-r--r--  1 root root 102K Jul 28 23:03 hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2
-rw-r--r--  1 root root 187K Jul 28 23:13 hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2
-rw-r--r--  1 root root 3.3G Jul 28 23:14 hetzner1_final_backup_before_hetzner1_deprecation_osemain.tar.gpg
[root@hetzner2 sync]# 
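  1. the pattern in uploadToGlacier.sh is roughly this (a sketch of the idea, not the script verbatim; variable names may differ)
# upload each encrypted archive; delete the local copy only if the upload exits 0,
# so anything left on disk is an upload that still needs to be retried
for archive in ${backupArchives}; do
  /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020 "${archive}"
  if [ $? -eq 0 ]; then
    rm "${archive}"
  fi
done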
  1. I copied the script hetzner1:/home/marcin_ose/backups/retryUploadToGlacier.sh hetzner2:/root/bin/retryUploadToGlacier.sh
  2. I changed the archive list to just those 3 failed listed above.
  3. I executed 'retryUploadToGlacier.sh'. Hopefully it'll be finished by tomorrow.
  1. ...
  1. I successfully launched OSE Linux in an HVM Qube, but I left it idle for some time & when I returned to it, it was entirely black & unresponsive :(
  1. ...
  1. the retry script for the hetzner1 uploads to glacier finished; all 3 failed again. I re-ran it.

Sat Jul 28, 2018

  1. Updated the existing backup.sh script to exclude '/home/b2user/sync', which is now defined as $b2StagingDir in '/root/backups/backup.settings'
  2. I began to wonder how we can throttle these uploads to b2. Previously, we were using rsync's 'bwlimit' argument to limit our backup process's upload to dreamhost to 3 MB/s. There doesn't appear to be an equivalent option for b2. Unfortunately, googling for info about how to throttle backblaze uploads mostly turns up clients claiming that backblaze was throttling their uploads on _their_ end (mostly for the unlimited home-user accounts--also denied by backblaze). Ironic, but we _do_ want to throttle, albeit on our end.
    1. looks like this was requested over a year ago (2016-12), but is unassigned. The stated workaround is to use trickle; it looks like we'd want something like `trickle -s -u 3000 b2 upload-file --threads 1 ose-server-backups <sourceFilePath> <destFilePath>` (a sketch of wiring this into the backup script follows the install output below) https://github.com/Backblaze/B2_Command_Line_Tool/issues/310
  3. I installed trickle
[root@hetzner2 backblaze]# which trickle
/usr/bin/which: no trickle in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
[root@hetzner2 backblaze]# yum install trickle
Loaded plugins: fastestmirror, replace
Loading mirror speeds from cached hostfile
 * base: mirror.wiuwiu.de
 * epel: mirror.wiuwiu.de
 * extras: mirror.wiuwiu.de
 * updates: mirror.wiuwiu.de
 * webtatic: uk.repo.webtatic.com
Resolving Dependencies
--> Running transaction check
---> Package trickle.x86_64 0:1.07-19.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================================================================================
 Package                                     Arch                                       Version                                          Repository                                Size
========================================================================================================================================================================================
Installing:
 trickle                                     x86_64                                     1.07-19.el7                                      epel                                      48 k

Transaction Summary
========================================================================================================================================================================================
Install  1 Package

Total download size: 48 k
Installed size: 103 k
Is this ok [y/d/N]: y
Downloading packages:
trickle-1.07-19.el7.x86_64.rpm                                                                                                                                   |  48 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : trickle-1.07-19.el7.x86_64                                                                                                                                           1/1 
  Verifying  : trickle-1.07-19.el7.x86_64                                                                                                                                           1/1 

Installed:
  trickle.x86_64 0:1.07-19.el7                                                                                                                                                          

Complete!
[root@hetzner2 backblaze]# 

[root@hetzner2 backblaze]# which trickle
/bin/trickle
[root@hetzner2 backblaze]#
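  1. the plan for wiring trickle into backup2.sh is roughly this (a sketch; the archive name & exact flags in the real script may differ, and the 3000 KB/s value mirrors our old 3 MB/s rsync limit)
# throttle the b2 upload to ~3000 KB/s using trickle's standalone mode (-s);
# keep b2 single-threaded so trickle can actually meter the one connection
b2StagingDir="/home/b2user/sync"
archive="hetzner2_$(date +%Y%m%d_%H%M%S).tar.gpg"
trickle -s -u 3000 b2 upload-file --threads 1 ose-server-backups \
  "${b2StagingDir}/${archive}" "${archive}"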

  1. made many changes to backup2.sh, and here's the output of the first completed test run
[root@hetzner2 backups]# ./backup2.sh 
================================================================================
Beginning Backup Run on 20180728_202957
/bin/tar: Removing leading `/' from member names

real    0m0.902s
user    0m0.911s
sys     0m0.057s
/bin/tar: Removing leading `/' from member names
/root/backups/sync/hetzner2_20180728_202957/etc/
/root/backups/sync/hetzner2_20180728_202957/etc/etc.20180728_202957.tar.gz

real    0m0.010s
user    0m0.000s
sys     0m0.010s

real    0m0.494s
user    0m0.479s
sys     0m0.012s
/home/b2user/sync/hetzner2_20180728_202957.tar.gpg: 100%|| 11.2M/11.2M [00:05<00:00, 1.89MB/s]
URL by file name: https://f001.backblazeb2.com/file/ose-server-backups/hetzner2_20180728_202957.tar.gpg
URL by fileId: https://f001.backblazeb2.com/b2api/v1/b2_download_file_by_id?fileId=4_z5605817c251dadb96e4d0118_f114b5017aac233bd_d20180728_m203000_c001_v0001106_t0015
{
  "action": "upload", 
  "fileId": "4_z5605817c251dadb96e4d0118_f114b5017aac233bd_d20180728_m203000_c001_v0001106_t0015", 
  "fileName": "hetzner2_20180728_202957.tar.gpg", 
  "size": 11226387, 
  "uploadTimestamp": 1532809800000
}
[root@hetzner2 backups]# 
  1. next I logged into the backblaze wui, and I browsed the files in the bucket. Sure enough, I see the 11.2M file named 'hetzner2_20180728_202957.tar.gpg'

  1. this time I wanted to test restoring from the wui, so I downloaded this file (hetzner2_20180728_202957.tar.gpg) straight from the browser, then decrypted it on my laptop
root@ose:/home/user/tmp/backblaze/restore.2018-07# date
Sat Jul 28 16:49:26 EDT 2018
root@ose:/home/user/tmp/backblaze/restore.2018-07# pwd
/home/user/tmp/backblaze/restore.2018-07
root@ose:/home/user/tmp/backblaze/restore.2018-07# ls -lah
total 11M
drwxrwxr-x 2 root root 4.0K Jul 28 16:49 .
drwxrwxr-x 3 root root 4.0K Jul 28 16:49 ..
-rw-r--r-- 1 user user  11M Jul 28 16:47 hetzner2_20180728_202957.tar.gpg
root@ose:/home/user/tmp/backblaze/restore.2018-07# 
root@ose:/home/user/tmp/backblaze/restore.2018-07# gpg --output hetzner2_20180728_202957.tar --batch --passphrase-file /home/user/keepass/ose-backups-cron.key --decrypt hetzner2_20180728_202957.tar.gpg 
gpg: AES256 encrypted data
gpg: encrypted with 1 passphrase
root@ose:/home/user/tmp/backblaze/restore.2018-07# ls -lah
total 22M
drwxrwxr-x 2 root root 4.0K Jul 28 16:50 .
drwxrwxr-x 3 root root 4.0K Jul 28 16:49 ..
-rw-rw-r-- 1 root root  11M Jul 28 16:50 hetzner2_20180728_202957.tar
-rw-r--r-- 1 user user  11M Jul 28 16:47 hetzner2_20180728_202957.tar.gpg
root@ose:/home/user/tmp/backblaze/restore.2018-07# 
root@ose:/home/user/tmp/backblaze/restore.2018-07# tar -xvf hetzner2_20180728_202957.tar 
root/backups/sync/hetzner2_20180728_202957/etc/
root/backups/sync/hetzner2_20180728_202957/etc/etc.20180728_202957.tar.gz
root@ose:/home/user/tmp/backblaze/restore.2018-07# ls -lah
total 22M
drwxrwxr-x 3 root root 4.0K Jul 28 16:51 .
drwxrwxr-x 3 root root 4.0K Jul 28 16:49 ..
-rw-rw-r-- 1 root root  11M Jul 28 16:50 hetzner2_20180728_202957.tar
-rw-r--r-- 1 user user  11M Jul 28 16:47 hetzner2_20180728_202957.tar.gpg
drwxrwxr-x 3 root root 4.0K Jul 28 16:51 root
root@ose:/home/user/tmp/backblaze/restore.2018-07# 
root@ose:/home/user/tmp/backblaze/restore.2018-07# cd root/backups/sync/hetzner2_20180728_202957/etc/
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# ls -lah
total 11M
drwxr-xr-x 2 root root 4.0K Jul 28 16:29 .
drwxrwxr-x 3 root root 4.0K Jul 28 16:51 ..
-rw-r--r-- 1 root root  11M Jul 28 16:29 etc.20180728_202957.tar.gz
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# 
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# tar -xzvf etc.20180728_202957.tar.gz 
...
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# 
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# ls -lah
total 11M
drwxr-xr-x  3 root root 4.0K Jul 28 16:51 .
drwxrwxr-x  3 root root 4.0K Jul 28 16:51 ..
drwxrwxr-x 98 root root  12K Jul 28 16:53 etc
-rw-r--r--  1 root root  11M Jul 28 16:29 etc.20180728_202957.tar.gz
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# 
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# ls -lah etc
total 2.0M
drwxrwxr-x  98 root root  12K Jul 28 16:53 .
drwxr-xr-x   3 root root 4.0K Jul 28 16:51 ..
drwxr-xr-x   4 root root 4.0K Aug  3  2017 acpi
-rw-r--r--   1 root root   16 Dec 15  2015 adjtime
-rw-r--r--   1 root root 1.5K Jun  7  2013 aliases
-rw-r--r--   1 root root  12K Jul 10  2017 aliases.db
drwxr-xr-x   2 root root 4.0K Sep 22  2017 alternatives
-rw-------   1 root root  541 Aug  3  2017 anacrontab
-rw-r--r--   1 root root   55 Mar  1  2017 asound.conf
drwxr-x---   3 root root 4.0K Sep 22  2017 audisp
drwxr-x---   3 root root 4.0K Sep 22  2017 audit
drwxr-xr-x   2 root root 4.0K Feb 20 14:42 awstats
drwxr-xr-x   2 root root 4.0K Oct 13  2016 bacula
drwxr-xr-x   2 root root 4.0K Sep 22  2017 bash_completion.d
-rw-r--r--   1 root root 2.8K Nov  5  2016 bashrc
drwxr-xr-x   2 root root 4.0K Sep  6  2017 binfmt.d
drwxr-xr-x   2 root root 4.0K Mar 26 08:49 cacti
-rw-r--r--   1 root root   38 Aug 30  2017 centos-release
-rw-r--r--   1 root root   51 Aug 30  2017 centos-release-upstream
drwxr-xr-x   2 root root 4.0K Aug  4  2017 chkconfig.d
-rw-r--r--   1 root root 1.1K Jan 31  2017 chrony.conf
-rw-r-----   1 root  994   62 Dec 15  2015 chrony.keys
-rw-r-----   1 root  994  481 Jan 31  2017 chrony.keys.rpmnew
drwxr-xr-x   2 root root 4.0K Apr 12 05:19 cron.d
drwxr-xr-x   2 root root 4.0K Sep 22  2017 cron.daily
-rw-------   1 root root    0 Aug  3  2017 cron.deny
drwxr-xr-x   2 root root 4.0K Jan 24  2018 cron.hourly
drwxr-xr-x   2 root root 4.0K Jun  9  2014 cron.monthly
-rw-r--r--   1 root root  451 Jun  9  2014 crontab
drwxr-xr-x   2 root root 4.0K Jun  9  2014 cron.weekly
-rw-------   1 root root    0 Dec 15  2015 crypttab
-rw-r--r--   1 root root 1.6K Nov  5  2016 csh.cshrc
-rw-r--r--   1 root root  841 Jun  7  2013 csh.login
drwxr-xr-x   4 root root 4.0K Jul  9  2017 dbus-1
drwxr-xr-x   2 root root 4.0K Sep 22  2017 default
drwxr-xr-x   2 root root 4.0K Sep 22  2017 depmod.d
drwxr-x---   4 root root 4.0K Sep 22  2017 dhcp
-rw-r--r--   1 root root 5.0K Nov  4  2016 DIR_COLORS
-rw-r--r--   1 root root 5.6K Nov  4  2016 DIR_COLORS.256color
-rw-r--r--   1 root root 4.6K Nov  4  2016 DIR_COLORS.lightbgcolor
-rw-r--r--   1 root root 1.3K Aug  5  2017 dracut.conf
drwxr-xr-x   2 root root 4.0K Aug  5  2017 dracut.conf.d
drwxr-xr-x   2 root root 4.0K Jul 17 22:00 duplicity
-rw-r--r--   1 root root  112 Mar 16  2017 e2fsck.conf
-rw-r--r--   1 root root    0 Nov  5  2016 environment
-rw-r--r--   1 root root 1.3K Nov  5  2016 ethertypes
-rw-r--r--   1 root root    0 Jun  7  2013 exports
lrwxrwxrwx   1 root root   56 Apr 11  2016 favicon.png -> /usr/share/icons/hicolor/16x16/apps/fedora-logo-icon.png
-rw-r--r--   1 root root   70 Nov  5  2016 filesystems
drwxr-xr-x   3 root root 4.0K Sep 22  2017 fonts
-rw-r--r--   1 root root  226 May 31  2016 fstab
drwxr-xr-x   2 root root 4.0K Aug  2  2017 gcrypt
-rw-r--r--   1 root root  842 Nov  5  2016 GeoIP.conf
-rw-r--r--   1 root root  858 Nov  5  2016 GeoIP.conf.default
drwxr-xr-x   2 root root 4.0K Nov  5  2016 gnupg
-rw-r--r--   1 root root   94 Mar 24  2017 GREP_COLORS
drwxr-xr-x   4 root root 4.0K Jun  9  2014 groff
-rw-r--r--   1 root root 1.1K Jul 27 22:47 group
-rw-r--r--   1 root root 1.1K Jun  6 14:13 group-
lrwxrwxrwx   1 root root   22 Sep 22  2017 grub2.cfg -> ../boot/grub2/grub.cfg
drwx------   2 root root 4.0K Sep 22  2017 grub.d
----------   1 root root  897 Jul 27 22:47 gshadow
----------   1 root root  886 Mar  2 21:44 gshadow-
drwxr-xr-x   3 root root 4.0K Aug  3  2017 gss
drwxr-xr-x   2 root root 4.0K Oct 21  2017 hitch
-rw-r--r--   1 root root    9 Jun  7  2013 host.conf
-rw-r--r--   1 root root   31 Jul  9  2017 hostname
-rw-r--r--   1 root root  758 Oct  9  2017 hosts
-rw-r--r--   1 root root  370 Jun  7  2013 hosts.allow
-rw-r--r--   1 root root  479 Jul 28 16:29 hosts.deny
drwxr-xr-x   6 root root 4.0K Feb 13 23:05 httpd
-rw-r--r--   1 root root  27K Sep  4  2017 httpd.20170904.tar.gz
lrwxrwxrwx   1 root root   11 Sep 22  2017 init.d -> rc.d/init.d
-rw-r--r--   1 root root  511 Aug  3  2017 inittab
-rw-r--r--   1 root root  942 Jun  7  2013 inputrc
drwxr-xr-x   2 root root 4.0K Sep 22  2017 iproute2
-rw-r--r--   1 root root   23 Aug 30  2017 issue
-rw-r--r--   1 root root   22 Aug 30  2017 issue.net
drwxr-xr-x   3 root root 4.0K Dec  1  2016 java
drwxr-xr-x   2 root root 4.0K Nov 20  2015 jvm
drwxr-xr-x   2 root root 4.0K Nov 20  2015 jvm-commmon
-rw-r--r--   1 root root 7.1K Sep 22  2017 kdump.conf
drwxrwx---   2 root  993 4.0K Jul 17 19:25 keepass
-rw-r--r--   1 root root  590 Apr 28  2017 krb5.conf
drwxr-xr-x   2 root root 4.0K Aug  3  2017 krb5.conf.d
-rw-r--r--   1 root root  32K Jul 17 22:00 ld.so.cache
-rw-r--r--   1 root root   28 Feb 27  2013 ld.so.conf
drwxr-xr-x   2 root root 4.0K Oct  3  2017 ld.so.conf.d
drwxr-xr-x   9 root root 4.0K Jul 13 00:20 letsencrypt
-r--------   1 root root  39K Oct  8  2017 letsencrypt.20171028.tar.gz
-rw-r-----   1 root root  191 Apr 19  2017 libaudit.conf
drwxr-xr-x   5 root root 4.0K Aug  9  2017 libreport
-rw-r--r--   1 root root 2.4K Oct 12  2013 libuser.conf
-rw-r--r--   1 root root   19 Dec 15  2015 locale.conf
lrwxrwxrwx   1 root root   25 Jul  9  2017 localtime -> ../usr/share/zoneinfo/UTC
-rw-r--r--   1 root root 2.0K Nov  4  2016 login.defs
-rw-r--r--   1 root root  675 Jul  9  2017 logrotate.conf
drwxr-xr-x   2 root root 4.0K Apr 12 05:19 logrotate.d
drwxr-xr-x   4 root root 4.0K Nov  5  2016 logwatch
drwxr-xr-x   6 root root 4.0K Sep 22  2017 lvm
-r--r--r--   1 root root   33 May 31  2016 machine-id
-rw-r--r--   1 root root  111 Nov  5  2016 magic
-rw-r--r--   1 root root  272 May 14  2013 mailcap
-rw-r--r--   1 root root 2.0K Aug  3  2017 mail.rc
-rw-r--r--   1 root root 5.1K Aug  7  2017 makedumpfile.conf.sample
-rw-r--r--   1 root root 5.1K Jun  9  2014 man_db.conf
drwxr-xr-x   2 root root 4.0K Nov 20  2015 maven
-rw-r--r--   1 root root  287 May 31  2016 mdadm.conf
-rw-r--r--   1 root root  51K May 14  2013 mime.types
-rw-r--r--   1 root root  936 Aug  2  2017 mke2fs.conf
drwxr-xr-x   2 root root 4.0K Sep 22  2017 modprobe.d
drwxr-xr-x   2 root root 4.0K Sep  6  2017 modules-load.d
-rw-r--r--   1 root root    0 Jun  7  2013 motd
lrwxrwxrwx   1 root root   17 Dec 15  2015 mtab -> /proc/self/mounts
drwxr-xr-x   8 root root 4.0K Mar  3 08:41 munin
-rw-r--r--   1 root root  630 Aug 11  2017 my.cnf
drwxr-xr-x   2 root root 4.0K Sep 22  2017 my.cnf.d
-rw-r--r--   1 root root 8.7K Jun 10  2014 nanorc
drwxr-xr-x   3 root root 4.0K May  3  2017 NetworkManager
-rw-r--r--   1 root root   58 Aug  3  2017 networks
drwxr-xr-x   4 root root 4.0K Jun 18 15:24 nginx
-rw-r--r--   1 root root 1.7K Jul  9  2017 nsswitch.conf
-rw-r--r--   1 root root 1.7K May 31  2016 nsswitch.conf.bak
-rw-r--r--   1 root root 1.7K Aug  1  2017 nsswitch.conf.rpmnew
drwxr-xr-x   3 root root 4.0K Jul  9  2017 ntp
-rw-r--r--   1 root root 2.2K May 31  2016 ntp.conf
drwxr-xr-x   3 root root 4.0K Sep 22  2017 openldap
drwxr-xr-x   2 root root 4.0K Nov  5  2016 opt
-rw-r--r--   1 root root  393 Aug 30  2017 os-release
-rw-------   1 root root   89 Jul 10  2017 ossec-init.conf
drwxr-xr-x   2 root root 4.0K Sep 22  2017 pam.d
-rw-r--r--   1 root root 2.3K Jul 27 22:47 passwd
-rw-r--r--   1 root root 2.2K Mar  2 21:44 passwd-
drwxr-xr-x   2 root root 4.0K Jun 11  2017 pear
-rw-r--r--   1 root root 1.1K Sep 22  2017 pear.conf
drwxr-xr-x   2 root root 4.0K Mar 16 19:38 php.d
-rw-r--r--   1 root root  65K Mar 16 20:09 php.ini
-rw-r--r--   1 root root  64K Jul 11  2017 php.ini.20170711.bak
-rw-r--r--   1 root root  64K Jul 18  2017 php.ini.20170718.bak
-rw-r--r--   1 root root  65K Aug  2  2017 php.ini.20170802.hardened
-rw-r--r--   1 root root  64K Sep 17  2016 php.ini.rpmnew
drwxr-xr-x   2 root root 4.0K Mar 16 17:10 php-zts.d
drwxr-xr-x   3 root root 4.0K Aug  4  2017 pkcs11
drwxr-xr-x  11 root root 4.0K Sep 22  2017 pki
drwxr-xr-x   2 root root 4.0K Sep 22  2017 plymouth
drwxr-xr-x   5 root root 4.0K Nov  5  2016 pm
drwxr-xr-x   5 root root 4.0K Jul  9  2017 polkit-1
drwxr-xr-x   2 root root 4.0K Jun 10  2014 popt.d
drwxr-xr-x   2 root root 4.0K Jul 10 22:17 postfix
drwxr-xr-x   3 root root 4.0K Sep 22  2017 ppp
drwxr-xr-x   2 root root 4.0K Sep 22  2017 prelink.conf.d
-rw-r--r--   1 root root  233 Jun  7  2013 printcap
-rw-r--r--   1 root root 1.8K Nov  5  2016 profile
drwxr-xr-x   2 root root 4.0K May 24 23:49 profile.d
-rw-r--r--   1 root root 6.4K Jun  7  2013 protocols
drwxr-xr-x   2 root root 4.0K Sep 22  2017 python
lrwxrwxrwx   1 root root   10 Sep 22  2017 rc0.d -> rc.d/rc0.d
lrwxrwxrwx   1 root root   10 Sep 22  2017 rc1.d -> rc.d/rc1.d
lrwxrwxrwx   1 root root   10 Sep 22  2017 rc2.d -> rc.d/rc2.d
lrwxrwxrwx   1 root root   10 Sep 22  2017 rc3.d -> rc.d/rc3.d
lrwxrwxrwx   1 root root   10 Sep 22  2017 rc4.d -> rc.d/rc4.d
lrwxrwxrwx   1 root root   10 Sep 22  2017 rc5.d -> rc.d/rc5.d
lrwxrwxrwx   1 root root   10 Sep 22  2017 rc6.d -> rc.d/rc6.d
drwxr-xr-x  10 root root 4.0K Aug  4  2017 rc.d
lrwxrwxrwx   1 root root   13 Sep 22  2017 rc.local -> rc.d/rc.local
lrwxrwxrwx   1 root root   14 Sep 22  2017 redhat-release -> centos-release
-rw-r--r--   1 root root  245 May 31  2016 resolv.conf
-rw-r--r--   1 root root 1.6K Dec 24  2012 rpc
drwxr-xr-x   2 root root 4.0K Oct  3  2017 rpm
-rw-r--r--   1 root root  458 Aug  2  2017 rsyncd.conf
-rw-r--r--   1 root root 3.3K Jul 13  2017 rsyslog.conf
drwxr-xr-x   2 root root 4.0K Aug  6  2017 rsyslog.d
-rw-r--r--   1 root root  966 Aug  3  2017 rwtab
drwxr-xr-x   2 root root 4.0K Aug  3  2017 rwtab.d
drwxr-xr-x   2 root root 4.0K Aug  2  2017 sasl2
-rw-r--r--   1 root root 6.6K Feb 16  2016 screenrc
-rw-------   1 root root  221 Nov  5  2016 securetty
drwxr-xr-x   6 root root 4.0K Jul  9  2017 security
drwxr-xr-x   5 root root 4.0K Sep 22  2017 selinux
-rw-r--r--   1 root root 655K Jun  7  2013 services
-rw-r--r--   1 root root  216 Aug  4  2017 sestatus.conf
----------   1 root root 2.1K Jul 27 22:47 shadow
----------   1 root root 2.0K Jun 15 09:12 shadow-
----------   1 root root 1.5K Jul  9  2017 shadow.20170709.bak
-rw-r--r--   1 root root   76 Jun  7  2013 shells
drwxr-xr-x   2 root root 4.0K Sep 22  2017 skel
drwxr-xr-x   2 root root 4.0K Mar  2 15:58 snmp
drwxr-xr-x   3 root root 4.0K Jan 10  2018 ssh
drwxr-xr-x   2 root root 4.0K Sep 22  2017 ssl
-rw-r--r--   1 root root  212 Aug  3  2017 statetab
drwxr-xr-x   2 root root 4.0K Aug  3  2017 statetab.d
-rw-r--r--   1 root root    0 Nov  5  2016 subgid
-rw-r--r--   1 root root    0 Nov  5  2016 subuid
drwxr-xr-x   2 root root 4.0K Aug 23  2017 subversion
-rw-r-----   1 root root 1.8K Sep  6  2017 sudo.conf
-r--r-----   1 root root 4.6K Mar 29  2017 sudoers
drwxr-x---   2 root root 4.0K Sep  6  2017 sudoers.d
-r--r-----   1 root root 3.9K Sep  6  2017 sudoers.rpmnew
-rw-r-----   1 root root 3.2K Sep  6  2017 sudo-ldap.conf
drwxr-xr-x   7 root root 4.0K Mar  2 21:44 sysconfig
-rw-r--r--   1 root root  449 Aug  3  2017 sysctl.conf
drwxr-xr-x   2 root root 4.0K Sep 22  2017 sysctl.d
drwxr-xr-x   4 root root 4.0K Sep 22  2017 systemd
lrwxrwxrwx   1 root root   14 Sep 22  2017 system-release -> centos-release
-rw-r--r--   1 root root   23 Aug 30  2017 system-release-cpe
-rw-------   1   59   59 6.9K Aug  3  2017 tcsd.conf
drwxr-xr-x   2 root root 4.0K Sep  6  2017 terminfo
drwxr-xr-x   2 root root 4.0K Sep  6  2017 tmpfiles.d
-rw-r--r--   1 root root  199 Oct  6  2014 trickled.conf
-rw-r--r--   1 root root  750 Aug 23  2017 trusted-key.key
drwxr-xr-x   2 root root 4.0K Sep 22  2017 tuned
drwxr-xr-x   3 root root 4.0K Oct  9  2017 udev
drwxr-xr-x   5 root root 4.0K Apr 12 16:15 varnish
drwxr-xr-x   5 root root 4.0K Nov 19  2017 varnish.20171119.bak
-rw-r--r--   1 root root   37 Dec 15  2015 vconsole.conf
-rw-r--r--   1 root root 2.0K Aug  1  2017 vimrc
-rw-r--r--   1 root root 2.0K Aug  1  2017 virc
drwxr-xr-x   2 root root 4.0K Sep 22  2017 vsftpd
drwxr-xr-x 118 root root 4.0K Jul  9  2017 webmin
-rw-r--r--   1 root root 4.4K Aug  3  2017 wgetrc
-rw-r--r--   1 root root  382 Mar 29  2013 whois.conf
drwxr-xr-x   5 root root 4.0K Nov  5  2016 X11
drwxr-xr-x   4 root root 4.0K Nov  5  2016 xdg
drwxr-xr-x   2 root root 4.0K Nov  5  2016 xinetd.d
drwxr-xr-x   6 root root 4.0K Sep 22  2017 yum
-rw-r--r--   1 root root  970 Aug  5  2017 yum.conf
drwxr-xr-x   2 root root 4.0K Oct 27  2017 yum.repos.d
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# cat etc/hostname 
hetzner2.opensourceecology.org
root@ose:/home/user/tmp/backblaze/restore.2018-07/root/backups/sync/hetzner2_20180728_202957/etc# 
  1. there's quite a lot of nested dirs & tarballs, but it's all there, and it's all very intuitive to recover--much simpler than duplicity or glacier!
  2. I added some logic to the script to add a prefix to the archive file name based on the current day (see the sketch after this list).
    1. if the backup was done on January first, the prefix will be "yearly_"
    2. if it's the 1st of any month, it will be "monthly_"
    3. if it's the 1st day of the week (Monday), it will be "weekly_"
    4. otherwise it's "daily"
  3. now we can create lifecycle rules in backblaze, greatly simplifying management and reducing costs.
    1. I'll create no lifecycle rules for "yearly_*" files
    2. I'll keep "daily_*" files for 3 days
    3. I'll keep "monthly_*" files for 1 year = 365 days
    4. I'll keep "weekly_*" files for 1 month = 31 days
  4. I created the above lifecycle settings in the wui. It was easy: after logging in, just click on "buckets" in the left navigation panel, then click the "Lifecycle Settings" link on the corresponding bucket (ie: ose-server-backups). A dialog opens; click the "Use custom lifecycle rules" option, then "Add Lifecycle Rules", which adds another row to the list of rules. Each rule has 3 fields: "File Path (fileNamePrefix)", "Days Till Hide (daysFromUploadingToHiding)", and "Days Till Delete (daysFromHidingToDeleting)". I set the two day fields to the numbers described above. I'm not sure if this means that daily files will be deleted in 3 or 6 days (the sum of the two?). I'll just check back later and see. Note that it wouldn't let me set either of these fields to "0".
  5. I updated the existing backup.sh script to call my b2 poc backup2.sh script at the end of its run rather than having a separate cron job.
  6. I sent an email to Marcin asking him to add our billing information to our b2 account
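  1. the prefix logic mentioned above is roughly this (a sketch; the variable names in the real backup2.sh may differ)
# pick an archive-name prefix from today's date so b2 lifecycle rules can expire
# daily/weekly/monthly archives on different schedules ("yearly_" archives are kept forever)
if [ "$(date +%m%d)" = "0101" ]; then
  prefix="yearly_"
elif [ "$(date +%d)" = "01" ]; then
  prefix="monthly_"
elif [ "$(date +%u)" = "1" ]; then
  prefix="weekly_"
else
  prefix="daily_"
fi
archiveName="${prefix}hetzner2_$(date +%Y%m%d_%H%M%S).tar.gpg"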
# ...
  1. meanwhile, I confirmed that the inventory job I had initiated against our aws glacier account finished
[root@hetzner2 ~]# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id 'TtDuZgF5z0CTF43G3rJL1lDVufM_xvXEY0RfxntrS5t-JSHrV88r4aigmMJRv5OUAj0UbQaX-8Bk5eiBL_A3e0zMzs-t' output.json
{
	"status": 200, 
	"acceptRanges": "bytes", 
	"contentType": "application/json"
}
[root@hetzner2 ~]# 
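  1. a quick way to eyeball that inventory json (assumes jq is installed; the inventory output lists each archive under ArchiveList)
# print the size & description of every archive in the retrieved glacier inventory
jq -r '.ArchiveList[] | "\(.Size)\t\(.ArchiveDescription)"' output.json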
  1. I opened the json, and it showed no changes from before! Indeed, checking the bash script, it looks like I had commented-out the actual upload-to-glacier lines, probably because my testing for osesurv kept adding redundant copies.
  2. I modified the uploadToGlacier.sh script to actually upload to glacier (!) and re-ran it.

Fri Jul 27, 2018

  1. emails
  2. the glacier.py script that I executed a couple of days ago exited with an error when attempting to execute a query (to what appears to be a local db?)
[root@hetzner2 sync]# date
Wed Jul 25 22:31:02 UTC 2018
[root@hetzner2 sync]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 sync]# glacier.py --region us-west-2 vault --wait sync deleteMeIn2020
usage: glacier.py [-h] [--region REGION] {vault,archive,job} ...
glacier.py: error: unrecognized arguments: --wait
[root@hetzner2 sync]# glacier.py --region us-west-2 vault sync --wait deleteMeIn2020
Traceback (most recent call last):
  File "/root/bin/glacier.py", line 736, in <module>
	main()
  File "/root/bin/glacier.py", line 732, in main
	App().main()
  File "/root/bin/glacier.py", line 718, in main
	self.args.func()
  File "/root/bin/glacier.py", line 471, in vault_sync
	wait=self.args.wait)
  File "/root/bin/glacier.py", line 462, in _vault_sync
	self._vault_sync_reconcile(vault, job, fix=fix)
  File "/root/bin/glacier.py", line 437, in _vault_sync_reconcile
	fix=fix)
  File "/root/bin/glacier.py", line 259, in mark_seen_upstream
	key=self.key, vault=vault, id=id).one()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2395, in one
	ret = list(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2437, in iter
	self.session._autoflush()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1208, in _autoflush
	util.raise_from_cause(e)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
	reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1198, in _autoflush
	self.flush()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in flush
	self._flush(objects)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush
	transaction.rollback(_capture_exception=True)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in exit
	compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2001, in _flush
	flush_context.execute()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute
	rec.execute(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 526, in execute
	uow
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 65, in save_obj
	mapper, table, insert)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 570, in _emit_insert_statements
	execute(statement, multiparams)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
	return meth(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
	return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement
	compiled_sql, distilled_params
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
	context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1159, in _handle_dbapi_exception
	exc_info
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
   reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
	context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute
	cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (IntegrityError) column id is not unique u'INSERT INTO archive (id, name, vault, "key", last_seen_upstream, created_here, deleted_here) VALUES (?, ?, ?, ?, ?, ?, ?)' (u'qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA', u'this is a metadata file showing the file and dir list contents of the archive of the same name', u'deleteMeIn2020', 'AKIAIWVTQ5TWWAGY5XBA', 1532298685, 1532571707.728233, None)
[root@hetzner2 sync]# timed out waiting for input: auto-logout
  1. unfortunately, I don't see the job id in that output. I could initiate a new one with `aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'`
  2. but, there does appear to be a db (from the above data), so maybe I could just steal the job id from glacier.py's db? I found the db in /root/.cache/glacier-cli/db
[root@hetzner2 ~]# ls -lah /root/.cache/glacier-cli/db
-rw-r--r-- 1 root root 31K Jul 17 00:45 /root/.cache/glacier-cli/db
[root@hetzner2 ~]# sqlite3 /root/.cache/glacier-cli/db
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .tables
archive
sqlite> .schema
CREATE TABLE archive (
		id VARCHAR NOT NULL, 
		name VARCHAR, 
		vault VARCHAR NOT NULL, 
		"key" VARCHAR NOT NULL, 
		last_seen_upstream INTEGER, 
		created_here INTEGER, 
		deleted_here INTEGER, 
		PRIMARY KEY (id)
);
sqlite> 
  1. so the db's archive list _does_ show the first round of backups that I pushed to glacier. This is the same file list that glacier.py has recently refused to show
sqlite> select name from archive;
this is a metadata file showing the file and dir list contents of the archive of the same name
this is a metadata file showing the file and dir list contents of the archive of the same name
this is a metadata file showing the file and dir list contents of the archive of the same name
this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates
this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates
hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name
hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates
hetzner1_20171001-052001.fileList.txt.bz2.gpg
hetzner1_20171001-052001.tar.gpg
hetzner1_20171101-062001.fileList.txt.bz2.gpg
hetzner1_20171101-062001.tar.gpg
hetzner1_20171201-062001.fileList.txt.bz2.gpg
hetzner2_20170702-052001.fileList.txt.bz2.gpg
hetzner2_20170702-052001.tar.gpg
hetzner2_20170801-072001.fileList.txt.bz2.gpg
hetzner2_20170801-072001.tar.gpg
hetzner2_20170901-072001.fileList.txt.bz2.gpg
hetzner2_20170901-072001.tar.gpg
hetzner2_20171001-072001.fileList.txt.bz2.gpg
hetzner2_20171001-072001.tar.gpg
hetzner2_20171101-072001.fileList.txt.bz2.gpg
hetzner2_20171101-072001.tar.gpg
hetzner2_20171202-072001.fileList.txt.bz2.gpg
hetzner2_20171202-072001.tar.gpg
hetzner2_20180102-072001.fileList.txt.bz2.gpg
hetzner2_20180102-072001.tar.gpg
hetzner2_20180202-072001.fileList.txt.bz2.gpg
hetzner2_20180302-072001.fileList.txt.bz2.gpg
hetzner2_20180401-072001.fileList.txt.bz2.gpg
hetzner2_20180401-072001.tar.gpg
hetzner1_20170701-052001.fileList.txt.bz2.gpg
hetzner1_20170801-052001.fileList.txt.bz2.gpg
hetzner1_20180101-062001.fileList.txt.bz2.gpg
hetzner1_20180201-062001.fileList.txt.bz2.gpg
hetzner1_20180201-062001.tar.gpg
hetzner1_20180301-062002.fileList.txt.bz2.gpg
hetzner1_20180301-062002.tar.gpg
hetzner1_20180401-052001.fileList.txt.bz2.gpg
hetzner1_20180401-052001.tar.gpg
hetzner1_20170701-052001.tar.gpg
hetzner1_20171201-062001.fileList.txt.bz2.gpg
hetzner1_20171201-062001.tar.gpg
hetzner2_20180202-072001.tar.gpg
hetzner2_20180302-072001.tar.gpg
hetzner1_20170801-052001.tar.gpg
hetzner1_20180101-062001.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
sqlite> 
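  1. side note: if we ever need to pull this info out of glacier.py's db non-interactively (e.g. from a script), sqlite3 will accept the query as an argument; a quick sketch against the same db path as above:
# dump the archive ids, descriptions, & vault in one shot
sqlite3 /root/.cache/glacier-cli/db 'select id, name, vault from archive;'

# or just the ids for the final hetzner1 backups
sqlite3 /root/.cache/glacier-cli/db "select id from archive where name like 'hetzner1_final_backup%';"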
  1. unfortunately, there doesn't appear to be anything here listing the job ids. I'll just do this with the `aws` command, then query for the output after _another_ day of waiting (see the sketch after the output below). I'm really getting sick of glacier.
[root@hetzner2 ~]# aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}'
{
	"location": "/099400651767/vaults/deleteMeIn2020/jobs/TtDuZgF5z0CTF43G3rJL1lDVufM_xvXEY0RfxntrS5t-JSHrV88r4aigmMJRv5OUAj0UbQaX-8Bk5eiBL_A3e0zMzs-t", 
	"jobId": "TtDuZgF5z0CTF43G3rJL1lDVufM_xvXEY0RfxntrS5t-JSHrV88r4aigmMJRv5OUAj0UbQaX-8Bk5eiBL_A3e0zMzs-t"
}
[root@hetzner2 ~]# 
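  1. for the record, once this job completes (roughly a day from now), I should be able to pull its results with the same `aws glacier` subcommands I used back in the Thr Jul 19 entry below; roughly:
# check whether the inventory-retrieval job has completed yet
aws glacier list-jobs --account-id - --vault-name deleteMeIn2020

# once it reports "Completed": true, save the inventory to a local json file
aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 \
 --job-id 'TtDuZgF5z0CTF43G3rJL1lDVufM_xvXEY0RfxntrS5t-JSHrV88r4aigmMJRv5OUAj0UbQaX-8Bk5eiBL_A3e0zMzs-t' output.json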
  1. ...
  1. meanwhile, I continue my PoC to deprecate aws glacier in favor of the cheaper & easier backblaze b2
  2. I attempted to do an `ls` of our 'ose-server-backups' bucket that I uploaded a file to last week, but I didn't get any results from the cli
[root@hetzner2 restore.2018-07]# duplicity list-current-files b2://${b2_accountid}:${b2_prikey}@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 19 21:14:17 2018
Tue Jul 17 00:51:32 2018 .
[root@hetzner2 restore.2018-07]# 
  1. I tried logging into the wui, and I saw that it had 3 files:
    1. duplicity-full-signatures.20180719T211417Z.sigtar.gpg 403.0 bytes 07/19/2018 17:14
    2. duplicity-full.20180719T211417Z.manifest.gpg 266.0 bytes 07/19/2018 17:14
    3. duplicity-full.20180719T211417Z.vol1.difftar.gpg 7.9 KB 07/19/2018 17:14
  2. I got some interesting output from the 'collection-status' argument
[root@hetzner2 restore.2018-07]# duplicity collection-status b2://${b2_accountid}:${b2_prikey}@ose-server-backups/
Last full backup date: Thu Jul 19 21:14:17 2018
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /root/.cache/duplicity/c034c9db7d546e6993579391627690db

Found 0 secondary backup chains.

Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Thu Jul 19 21:14:17 2018
Chain end time: Thu Jul 19 21:14:17 2018
Number of contained backup sets: 1
Total number of contained volumes: 1
 Type of backup set:                            Time:      Num volumes:
				Full         Thu Jul 19 21:14:17 2018                 1
-------------------------
No orphaned or incomplete backup sets found.
[root@hetzner2 restore.2018-07]# 
  1. ugh, looking through my old logs, I discovered that I had disclosed the existing b2 application id & key. So I changed them.
  2. I did some googling, but I couldn't find a way to list the files that I already uploaded with duplicity. So I tried a restore. For some reason the restore replaced the dir where I wanted the file to be dropped with a single encrypted file
[root@hetzner2 backblaze]# date
Sat Jul 28 01:49:41 UTC 2018
[root@hetzner2 backblaze]# pwd
/var/tmp/backblaze
[root@hetzner2 backblaze]# ls -lah
total 12K
drwxr-xr-x   3 root root 4.0K Jul 28 01:48 .
drwxrwxrwt. 54 root root 4.0K Jul 27 22:10 ..
drwxr-xr-x   2 root root 4.0K Jul 28 01:48 restore.2018-07
[root@hetzner2 backblaze]# ls -lah restore.2018-07/
total 8.0K
drwxr-xr-x 2 root root 4.0K Jul 28 01:48 .
drwxr-xr-x 3 root root 4.0K Jul 28 01:48 ..
[root@hetzner2 backblaze]# duplicity restore b2://${b2_accountid}:${b2_prikey}@ose-server-backups/ /var/tmp/backblaze/restore.2018-07/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 19 21:14:17 2018
[root@hetzner2 backblaze]# ls -lah
total 16K
drwxr-xr-x   2 root root 4.0K Jul 28 01:50 .
drwxrwxrwt. 54 root root 4.0K Jul 27 22:10 ..
-rw-r--r--   1 root root 7.5K Jul 17 00:51 restore.2018-07
[root@hetzner2 backblaze]# ls -lah restore.2018-07 
-rw-r--r-- 1 root root 7.5K Jul 17 00:51 restore.2018-07
[root@hetzner2 backblaze]# 
[root@hetzner2 backblaze]# cat restore.2018-07 | gpg --list-packets
:symkey enc packet: version 4, cipher 7, s2k 3, hash 2
		salt 016b52b5adff8d98, count 20971520 (228)
gpg: AES encrypted data
gpg: cancelled by user
:encrypted data packet:
		length: 7624
		mdc_method: 2
gpg: encrypted with 1 passphrase
gpg: decryption failed: No secret key
[root@hetzner2 backblaze]# 
  1. because I don't trust duplicity yet, the file that I uploaded was itself already encrypted. So duplicity should have encrypted our already-encrypted file. Trouble is--I don't know if duplicity already stripped its own layer of encryption on restore. Actually, it's the same key for both layers, so I was able to decrypt it at least once
[root@hetzner2 backblaze]# gpg -o decrypted.txt --batch --passphrase-file /root/backups/ose-backups-cron.key --decrypt restore.2018-07 
gpg: AES encrypted data
gpg: encrypted with 1 passphrase
[root@hetzner2 backblaze]# 
  1. ok, so that actually worked.
[root@hetzner2 backblaze]# bzcat restore.2018-07 | head -n 5
bzcat: restore.2018-07 is not a bzip2 file.
[root@hetzner2 backblaze]# bzcat decrypted.txt | head -n 5
================================================================================
This file is metadata for the archive 'hetzner1_final_backup_before_hetzner1_deprecation_osesurv'. In it, we list all the files included in the compressed/encrypted archive (produced using 'ls -lahR /var/tmp/deprecateHetzner1/osesurv'), including the files within the tarballs within the archive (produced using 'find /var/tmp/deprecateHetzner1/osesurv -type f -exec tar -tvf '{}' \; ')

This archive was made as a backup of the files and databases that were previously used on hetnzer1 prior to migrating to hetzner2 in 2018. Before we cancelled our contract for hetzner1, this backup was made & put on glacier for long-term storage in-case we learned later that we missed some content on the migration. Form more information, please see the following link:
 * https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation
[root@hetzner2 backblaze]# 
  1. But what happens when I have uploaded more than just 1 file? Let me upload some more complex dir/file structures, then attempt to restore both.
[root@hetzner2 backblaze]# mkdir sync
[root@hetzner2 backblaze]# ls
decrypted.txt  restore.2018-07  sync
[root@hetzner2 backblaze]# cd sync
[root@hetzner2 sync]# mkdir one
[root@hetzner2 sync]# mkdir two
[root@hetzner2 sync]# mkdir two/three
[root@hetzner2 sync]# touch headFile.txt
[root@hetzner2 sync]# echo "file contents $RANDOM" > headFile.txt
[root@hetzner2 sync]# cat headFile.txt 
file contents 5787
[root@hetzner2 sync]# echo "file contents $RANDOM" > one/oneFile.txt
[root@hetzner2 sync]# echo "file contents $RANDOM" > two/twoFile.txt
[root@hetzner2 sync]# echo "file contents $RANDOM" > two/three/threeFile.txt
[root@hetzner2 sync]# ls -lahR
.:
total 20K
drwxr-xr-x 4 root root 4.0K Jul 28 01:56 .
drwxr-xr-x 3 root root 4.0K Jul 28 01:55 ..
-rw-r--r-- 1 root root   19 Jul 28 01:56 headFile.txt
drwxr-xr-x 2 root root 4.0K Jul 28 01:57 one
drwxr-xr-x 3 root root 4.0K Jul 28 01:57 two

./one:
total 12K
drwxr-xr-x 2 root root 4.0K Jul 28 01:57 .
drwxr-xr-x 4 root root 4.0K Jul 28 01:56 ..
-rw-r--r-- 1 root root   20 Jul 28 01:57 oneFile.txt

./two:
total 16K
drwxr-xr-x 3 root root 4.0K Jul 28 01:57 .
drwxr-xr-x 4 root root 4.0K Jul 28 01:56 ..
drwxr-xr-x 2 root root 4.0K Jul 28 01:57 three
-rw-r--r-- 1 root root   20 Jul 28 01:57 twoFile.txt

./two/three:
total 12K
drwxr-xr-x 2 root root 4.0K Jul 28 01:57 .
drwxr-xr-x 3 root root 4.0K Jul 28 01:57 ..
-rw-r--r-- 1 root root   18 Jul 28 01:57 threeFile.txt
[root@hetzner2 sync]# grep -ir 'file' *
headFile.txt:file contents 5787
one/oneFile.txt:file contents 12508
two/three/threeFile.txt:file contents 776
two/twoFile.txt:file contents 25544
[root@hetzner2 sync]# 
  1. now when I try to back that up, it complains that I changed the source. I guess this makes sense: Duplicity is thinking differently than I am. I want just a bucket full of files that I drop in, but duplicity wants a directory that it can keep in sync. The question is: what is better for OSE? I think simpler is better. duplicity is nice, but I'm not sure the complexity is worth it. I'd rather have a bunch of gpg files just sitting there, accessible from the backblaze wui, downloadable, and decryptable following our documentation. Duplicity would also be better in terms of deltas, but what's that worth if nobody can recover our backups? Restorability is the most important feature of a backup. A backup is not a backup if you can't restore from it!

[root@hetzner2 sync]# duplicity /var/tmp/backblaze/sync/ b2://${b2_accountid}:${b2_prikey}@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 19 21:14:17 2018
Fatal Error: Backup source directory has changed.
Current directory: /var/tmp/backblaze/sync
Previous directory: /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg

Aborting because you may have accidentally tried to backup two different data sets to the same remote location, or using the same archive directory. If this is not a mistake, use the --allow-source-mismatch switch to avoid seeing this message
[root@hetzner2 sync]# 

  1. alternatively, let's check out how easy it is to list our existing files with the `b2` cli tool. much easier!
[maltfield@hetzner2 ~]$ b2 authorize-account ${b2_accountid} ${b2_prikey}
Using https://api.backblazeb2.com
[maltfield@hetzner2 ~]$ b2 list-buckets
5605817c251dadb96e4d0118  allPrivate  ose-server-backups
[maltfield@hetzner2 ~]$ b2 ls ose-server-backups
duplicity-full-signatures.20180719T211417Z.sigtar.gpg
duplicity-full.20180719T211417Z.manifest.gpg
duplicity-full.20180719T211417Z.vol1.difftar.gpg
[maltfield@hetzner2 ~]$ 
  1. and the upload (sync) works well too, but the 'keepDays' option deleted (well, "hid" = probably the non-destructive pre-step prior to deletion) the duplicity files. I thought it only applied to the files I was uploading *now*, but apparently not. Note we can still create lifecycle rules in b2 that match by filename prefix. So, for example, we can have "daily_*" files deleted after 3 days and "monthly_*" files deleted after 2 years (or never); see the sketch after the output below.
[maltfield@hetzner2 backblaze]$ date
Sat Jul 28 02:12:32 UTC 2018
[maltfield@hetzner2 backblaze]$ pwd
/var/tmp/backblaze
[maltfield@hetzner2 backblaze]$ ls -lah
total 16K
drwxr-xr-x   4 root      root 4.0K Jul 28 02:11 .
drwxrwxrwt. 54 root      root 4.0K Jul 27 22:10 ..
drwxr-xr-x   2 maltfield root 4.0K Jul 28 02:11 restore.2018-07
drwxr-xr-x   4 root      root 4.0K Jul 28 01:56 sync
[maltfield@hetzner2 backblaze]$ b2 sync --keepDays 1 sync/ b2://ose-server-backups/
hide   duplicity-full.20180719T211417Z.manifest.gpg                
hide   duplicity-full-signatures.20180719T211417Z.sigtar.gpg
upload two/twoFile.txt                                       
upload headFile.txt                                          
hide   duplicity-full.20180719T211417Z.vol1.difftar.gpg      
upload two/three/threeFile.txt                               
upload one/oneFile.txt                                       
[maltfield@hetzner2 backblaze]$ b2 ls ose-server-backups
headFile.txt
one/
two/
[maltfield@hetzner2 backblaze]$ 
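  1. for reference, here's roughly what those prefix-scoped lifecycle rules might look like (an untested sketch; it assumes our `b2` cli version supports the `update-bucket --lifecycleRules` option, and the day counts are just examples):
# hide "daily_*" files after 3 days & "monthly_*" files after ~2 years, then delete them a day later
b2 update-bucket --lifecycleRules '[
  {"fileNamePrefix": "daily_",   "daysFromUploadingToHiding": 3,   "daysFromHidingToDeleting": 1},
  {"fileNamePrefix": "monthly_", "daysFromUploadingToHiding": 730, "daysFromHidingToDeleting": 1}
]' ose-server-backups allPrivate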
  1. the restore worked too, though it crashed very ungracefully when I failed to enter the destination file name (I assumed it would auto-fill the file name if I only gave it the dest dir)
[maltfield@hetzner2 backblaze]$ ls -lah restore.2018-07/
total 8.0K
drwxr-xr-x 2 maltfield root 4.0K Jul 28 02:11 .
drwxr-xr-x 4 root      root 4.0K Jul 28 02:11 ..
[maltfield@hetzner2 backblaze]$ b2 download-file-by-name ose-server-backups two/three/threeFile.txt restore.2018-07/
Traceback (most recent call last):
  File "/usr/bin/b2", line 11, in <module>
	load_entry_point('b2==1.2.1', 'console_scripts', 'b2')()
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 1257, in main
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 1138, in run_command
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 398, in run
  File "/usr/lib/python2.7/site-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
	return function(*wrapee_args, **wrapee_kwargs)
  File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 166, in download_file_by_name
  File "build/bdist.linux-x86_64/egg/b2/session.py", line 38, in wrapper
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 210, in download_file_by_name
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 267, in _download_file_from_url
  File "/usr/lib64/python2.7/contextlib.py", line 17, in enter
	return self.gen.next()
  File "build/bdist.linux-x86_64/egg/b2/download_dest.py", line 186, in write_file_and_report_progress_context
  File "/usr/lib64/python2.7/contextlib.py", line 17, in enter
	return self.gen.next()
  File "build/bdist.linux-x86_64/egg/b2/download_dest.py", line 92, in write_to_local_file_context
IOError: [Errno 21] Is a directory: u'restore.2018-07/'
[maltfield@hetzner2 backblaze]$ b2 download-file-by-name ose-server-backups two/three/threeFile.txt restore.2018-07/threeFile.txt
restore.2018-07/threeFile.txt: 100%|| 18.0/18.0 [00:00<00:00, 18.2kB/s]
File name:    two/three/threeFile.txt
File id:      4_z5605817c251dadb96e4d0118_f10855cfa9b477620_d20180728_m021554_c001_v0001011_t0039
File size:    18
Content type: text/plain
Content sha1: d5c679a00e36769233a138252c2193ccb4ac5ffb
INFO src_last_modified_millis: 1532743040929
checksum matches
[maltfield@hetzner2 backblaze]$ ls -lah restore.2018-07/
total 12K
drwxr-xr-x 2 maltfield root      4.0K Jul 28 02:22 .
drwxr-xr-x 4 root      root      4.0K Jul 28 02:11 ..
-rw-rw-r-- 1 maltfield maltfield   18 Jul 28 01:57 threeFile.txt
[maltfield@hetzner2 backblaze]$ cat restore.2018-07/threeFile.txt
file contents 776
[maltfield@hetzner2 backblaze]$ 
  1. the downward sync (b2 -> local) spat out a ton of errors, but I think it worked (we don't really need this functionality anyway)
[maltfield@hetzner2 backblaze]$ ls -lah restore.2018-07/
total 8.0K
drwxr-xr-x 2 maltfield root 4.0K Jul 28 02:24 .
drwxr-xr-x 4 root      root 4.0K Jul 28 02:11 ..
[maltfield@hetzner2 backblaze]$
[maltfield@hetzner2 backblaze]$
[maltfield@hetzner2 backblaze]$ b2 sync b2://ose-server-backups/ restore.2018-07/
ERROR:b2.sync.action:an exception occurred in a sync action
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
	self.do_action(bucket, reporter)
  File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 143, in do_action
	bucket.download_file_by_name(self.b2_file_name, download_dest, SyncFileReporter(reporter))
  File "/usr/lib/python2.7/site-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
	return function(*wrapee_args, **wrapee_kwargs)
  File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 166, in download_file_by_name
	range_=range_,
  File "build/bdist.linux-x86_64/egg/b2/session.py", line 38, in wrapper
	return f(api_url, account_auth_token, *args, **kwargs)
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 210, in download_file_by_name
	url, account_auth_token_or_none, download_dest, range_=range_
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 236, in _download_file_from_url
	with self.b2_http.get_content(url, request_headers) as response:
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 337, in get_content
	response = _translate_and_retry(do_get, try_count, None)
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 119, in _translate_and_retry
	return _translate_errors(fcn, post_params)
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 55, in _translate_errors
	int(error['status']), error['code'], error['message'], post_params
FileNotPresent: File not present:
b2_download(duplicity-full-signatures.20180719T211417Z.sigtar.gpg, 4_z5605817c251dadb96e4d0118_f1198b00a71235379_d20180728_m021554_c001_v0001101_t0032, /var/tmp/backblaze/restore.2018-07/duplicity-full-signatures.20180719T211417Z.sigtar.gpg, 1532744154000): FileNotPresent() File not present:
ERROR:b2.sync.action:an exception occurred in a sync action
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
	self.do_action(bucket, reporter)
  File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 143, in do_action
	bucket.download_file_by_name(self.b2_file_name, download_dest, SyncFileReporter(reporter))
  File "/usr/lib/python2.7/site-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
	return function(*wrapee_args, **wrapee_kwargs)
  File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 166, in download_file_by_name
	range_=range_,
  File "build/bdist.linux-x86_64/egg/b2/session.py", line 38, in wrapper
	return f(api_url, account_auth_token, *args, **kwargs)
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 210, in download_file_by_name
	url, account_auth_token_or_none, download_dest, range_=range_
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 236, in _download_file_from_url
	with self.b2_http.get_content(url, request_headers) as response:
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 337, in get_content
	response = _translate_and_retry(do_get, try_count, None)
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 119, in _translate_and_retry
	return _translate_errors(fcn, post_params)
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 55, in _translate_errors
	int(error['status']), error['code'], error['message'], post_params
FileNotPresent: File not present:
b2_download(duplicity-full.20180719T211417Z.vol1.difftar.gpg, 4_z5605817c251dadb96e4d0118_f10121eebe2964534_d20180728_m021554_c001_v0001109_t0027, /var/tmp/backblaze/restore.2018-07/duplicity-full.20180719T211417Z.vol1.difftar.gpg, 1532744154000): FileNotPresent() File not present:
dnload two/twoFile.txt
dnload one/oneFile.txt
ERROR:b2.sync.action:an exception occurred in a sync action/s
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 42, in run
	self.do_action(bucket, reporter)
  File "build/bdist.linux-x86_64/egg/b2/sync/action.py", line 143, in do_action
	bucket.download_file_by_name(self.b2_file_name, download_dest, SyncFileReporter(reporter))
  File "/usr/lib/python2.7/site-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
	return function(*wrapee_args, **wrapee_kwargs)
  File "build/bdist.linux-x86_64/egg/b2/bucket.py", line 166, in download_file_by_name
	range_=range_,
  File "build/bdist.linux-x86_64/egg/b2/session.py", line 38, in wrapper
	return f(api_url, account_auth_token, *args, **kwargs)
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 210, in download_file_by_name
	url, account_auth_token_or_none, download_dest, range_=range_
  File "build/bdist.linux-x86_64/egg/b2/raw_api.py", line 236, in _download_file_from_url
	with self.b2_http.get_content(url, request_headers) as response:
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 337, in get_content
	response = _translate_and_retry(do_get, try_count, None)
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 119, in _translate_and_retry
	return _translate_errors(fcn, post_params)
  File "build/bdist.linux-x86_64/egg/b2/b2http.py", line 55, in _translate_errors
	int(error['status']), error['code'], error['message'], post_params
FileNotPresent: File not present:
b2_download(duplicity-full.20180719T211417Z.manifest.gpg, 4_z5605817c251dadb96e4d0118_f10155985b34af486_d20180728_m021553_c001_v0001033_t0048, /var/tmp/backblaze/restore.2018-07/duplicity-full.20180719T211417Z.manifest.gpg, 1532744153000): FileNotPresent() File not present:
dnload headFile.txt
dnload two/three/threeFile.txt  
ERROR:b2.console_tool:ConsoleTool command error
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 1138, in run_command
	return command.run(args)
  File "build/bdist.linux-x86_64/egg/b2/console_tool.py", line 921, in run
	allow_empty_source=allow_empty_source
  File "/usr/lib/python2.7/site-packages/logfury/v0_1/trace_call.py", line 84, in wrapper
	return function(*wrapee_args, **wrapee_kwargs)
  File "build/bdist.linux-x86_64/egg/b2/sync/sync.py", line 220, in sync_folders
	raise CommandError('sync is incomplete')
CommandError: sync is incomplete
ERROR: sync is incomplete
[maltfield@hetzner2 backblaze]$ ls -lah restore.2018-07/
total 20K
drwxr-xr-x 4 maltfield root      4.0K Jul 28 02:26 .
drwxr-xr-x 4 root      root      4.0K Jul 28 02:11 ..
-rw-rw-r-- 1 maltfield maltfield   19 Jul 28 01:56 headFile.txt
drwxrwxr-x 2 maltfield maltfield 4.0K Jul 28 02:26 one
drwxrwxr-x 3 maltfield maltfield 4.0K Jul 28 02:26 two
[maltfield@hetzner2 backblaze]$ 
  1. ok, I'm going to continue with this PoC by including it in our daily backup scripts. With b2, we have only 10G of free space. Our daily encrypted backup file is 15G, so I'll just test this with the files in /etc/ (a rough sketch of that /etc-only flow follows the `du` output below)
[root@hetzner2 backups]# du -sh /etc
41M     /etc
[root@hetzner2 backups]# du -sh /home
119M    /home
[root@hetzner2 backups]# du -sh /var/log
288M    /var/log
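  1. a rough sketch of the /etc-only flow I have in mind (hypothetical file names & paths; the real backup2.sh may end up different), reusing the same symmetric key file our existing backups use:
# hypothetical staging dir for files destined for b2
stamp="$(date -u +%Y%m%d_%H%M%S)"
backupDirPath="/root/backups/sync.b2"
mkdir -p "${backupDirPath}"

# tar up /etc & symmetrically encrypt it with our existing backup key
tar -cf - /etc 2>/dev/null | gpg --symmetric --cipher-algo aes --batch \
 --passphrase-file /root/backups/ose-backups-cron.key \
 -o "${backupDirPath}/hetzner2_${stamp}_etc.tar.gpg"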
  1. I forked our backup.sh script to /root/backups/backup2.sh. I'll run both for some time (probably a few months to truly test the lifecycle rules). Once I've determined that it works ok, I'll have Marcin add our billing information to the backblaze account, then we can merge it into backup.sh and deprecate the use of dreamhost for our backups.
  2. I'd also prefer that the user executing this b2 cli (which clearly _is_ buggy) run as an unprivileged user, not root. So I created a new user called 'b2user' to run the `b2` binary.
[root@hetzner2 backblaze]# useradd b2user
[root@hetzner2 backblaze]# su - b2user
[b2user@hetzner2 ~]$ whoami
b2user
[b2user@hetzner2 ~]$ echo $HOME
/home/b2user
[b2user@hetzner2 ~]$ b2 ls ose-server-backups
ERROR: Missing account data: 'NoneType' object has no attribute '__getitem__'  Use: b2 authorize-account
[b2user@hetzner2 ~]$ export b2_prikey='<obfuscated>'
[b2user@hetzner2 ~]$ export b2_accountid='<obfuscated>'
[b2user@hetzner2 ~]$ b2 authorize-account ${b2_accountid} ${b2_prikey}
Using https://api.backblazeb2.com
[b2user@hetzner2 ~]$ b2 ls ose-server-backups
headFile.txt
one/
two/
[b2user@hetzner2 ~]$ 
  1. this new unprivileged 'b2user' user can also be used by root
[b2user@hetzner2 ~]$ logout
[root@hetzner2 backblaze]# sudo -u b2user b2 ls ose-server-backups
headFile.txt
one/
two/
[root@hetzner2 backblaze]# 
  1. but the user can't access our 'sync' dir under /root, even if we make them the owner of the file
[root@hetzner2 backblaze]# ls -lah /root/backups/sync
total 16G
drwxr-xr-x 2 root root 4.0K Jul 27 07:42 .
drwx------ 5 root root 4.0K Jul 28 02:54 ..
-rw-r--r-- 1 root root  16G Jul 27 07:42 hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# chown b2user /root/backups/sync/hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# ls -lah /root/backups/sync
total 16G
drwxr-xr-x 2 root   root 4.0K Jul 27 07:42 .
drwx------ 5 root   root 4.0K Jul 28 02:54 ..
-rw-r--r-- 1 b2user root  16G Jul 27 07:42 hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# su - b2user
Last login: Sat Jul 28 02:53:45 UTC 2018 on pts/104
[b2user@hetzner2 ~]$ ls -lah /root/backups/sync/hetzner2_20180727_072001.tar.gpg
ls: cannot access /root/backups/sync/hetzner2_20180727_072001.tar.gpg: Permission denied
  1. symlinks won't help
[root@hetzner2 backblaze]# ln -s /root/backups/sync/hetzner2_20180727_072001.tar.gpg /home/b2user/
[root@hetzner2 backblaze]# chown b2user /home/b2user/hetzner2_20180727_072001.tar.gpg 
[root@hetzner2 backblaze]# chown -h b2user /home/b2user/hetzner2_20180727_072001.tar.gpg 
[root@hetzner2 backblaze]# ls -lah /home/b2user/
total 28K
drwx------   2 b2user b2user 4.0K Jul 28 02:56 .
drwxr-xr-x. 13 root   root   4.0K Jul 28 02:47 ..
-rw-------   1 b2user b2user 4.0K Jul 28 02:49 .b2_account_info
-rw-------   1 b2user b2user  462 Jul 28 02:59 .bash_history
-rw-r--r--   1 b2user b2user   18 Sep  6  2017 .bash_logout
-rw-r--r--   1 b2user b2user  193 Sep  6  2017 .bash_profile
-rw-r--r--   1 b2user b2user  231 Sep  6  2017 .bashrc
lrwxrwxrwx   1 b2user root     51 Jul 28 02:56 hetzner2_20180727_072001.tar.gpg -> /root/backups/sync/hetzner2_20180727_072001.tar.gpg
[root@hetzner2 backblaze]# su - b2user
Last login: Sat Jul 28 02:54:43 UTC 2018 on pts/104
[b2user@hetzner2 ~]$ ls
hetzner2_20180727_072001.tar.gpg
[b2user@hetzner2 ~]$ ls -lah hetzner2_20180727_072001.tar.gpg 
lrwxrwxrwx 1 root root 51 Jul 28 02:56 hetzner2_20180727_072001.tar.gpg -> /root/backups/sync/hetzner2_20180727_072001.tar.gpg
[b2user@hetzner2 ~]$ cat hetzner2_20180727_072001.tar.gpg 
cat: hetzner2_20180727_072001.tar.gpg: Permission denied
[b2user@hetzner2 ~]$ 
  1. I'll have to address this later. I want the 'b2user' user to be able to access this file, but nothing else in /root. And I want to preserve the logic that prevents backups from containing old backups. Probably the best option is to move the 'sync' dir into '/home/b2user/sync', but I'll only copy files there *after* they've been encrypted. That way the 'b2user' user cannot access any of our super-sensitive data (i.e. config files with passwords in /etc contained within our backups). A rough sketch of that flow follows.
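  1. a minimal sketch of that planned flow (hypothetical; the real logic will live in backup2.sh) -- stage only the already-encrypted tarball somewhere b2user can read it, then upload as the unprivileged user:
# create the staging dir owned by the unprivileged user
install -d -o b2user -g b2user -m 0700 /home/b2user/sync

# copy only the already-encrypted artifact out of root's staging area
cp /root/backups/sync/hetzner2_20180727_072001.tar.gpg /home/b2user/sync/
chown b2user:b2user /home/b2user/sync/hetzner2_20180727_072001.tar.gpg

# upload as the unprivileged b2user (which has already run `b2 authorize-account`)
sudo -u b2user b2 sync --keepDays 1 /home/b2user/sync/ b2://ose-server-backups/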

Wed Jul 25, 2018

  1. Marcin forwarded me a message from Harmon about him not seeing the updates to the page https://www.opensourceecology.org/ose-fellowship/
    1. I tried loading the page, and it was stale, as Harmon saw it. I logged in, and it changed to what Marcin saw. This is an issue with varnish cache clearing. We use a varnish plugin to trigger cache purges when a page/post is updated, but apparently it doesn't work. I don't have spare cycles to debug it, and it's really not a huge issue since this is a read-mostly, rarely-updated website. And the cache clears itself after 24 hours anyway.
  2. I checked up on the glacier.sh script's status, which was triggered last week to encrypt & upload the hetzner1 backups to aws glacier
    1. the staging 'sync' dir appears to contain the fileList metadata files we wanted; the next step is to check whether the archives actually exist in glacier
[root@hetzner2 sync]# date
Wed Jul 25 22:21:59 UTC 2018
[root@hetzner2 sync]# pwd
/var/tmp/deprecateHetzner1/sync
[root@hetzner2 sync]# ls -lah
total 11M
drwxr-xr-x 2 root root 4.0K Jul 25 22:20 .
drwx------ 9 root root 4.0K Jul 17 00:15 ..
-rw-r--r-- 1 root root 810K Jul 19 20:40 hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2
-rw-r--r-- 1 root root 810K Jul 19 20:40 hetzner1_final_backup_before_hetzner1_deprecation_microft.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 4.0M Jul 19 20:55 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2
-rw-r--r-- 1 root root 4.0M Jul 19 20:55 hetzner1_final_backup_before_hetzner1_deprecation_oseblog.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 100K Jul 19 20:50 hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2
-rw-r--r-- 1 root root 100K Jul 19 20:50 hetzner1_final_backup_before_hetzner1_deprecation_osecivi.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 102K Jul 19 20:44 hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2
-rw-r--r-- 1 root root 102K Jul 19 20:44 hetzner1_final_backup_before_hetzner1_deprecation_oseforum.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 186K Jul 19 20:49 hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2
-rw-r--r-- 1 root root 187K Jul 19 20:49 hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2.gpg
-rw-r--r-- 1 root root 7.4K Jul 17 00:51 hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2
-rw-r--r-- 1 root root 7.5K Jul 17 00:51 hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
[root@hetzner2 sync]# 
    1. here's a snippet of one of the metadata fileList files, which shows what the very large encrypted backup archive actually contains
[root@hetzner2 sync]# bzcat hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2 | head -n 50
================================================================================
This file is metadata for the archive 'hetzner1_final_backup_before_hetzner1_deprecation_osemain'. In it, we list all the files included in the compressed/encrypted archive (produced using 'ls -lahR /var/tmp/deprecateHetzner1/osemain'), including the files within the tarballs within the archive (produced using 'find /var/tmp/deprecateHetzner1/osemain -type f -exec tar -tvf '{}' \; ')

This archive was made as a backup of the files and databases that were previously used on hetnzer1 prior to migrating to hetzner2 in 2018. Before we cancelled our contract for hetzner1, this backup was made & put on glacier for long-term storage in-case we learned later that we missed some content on the migration. Form more information, please see the following link:
 * https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation


 - Michael Altfield <maltfield@opensourceecology.org>

 Note: this file was generated at 2018-07-19 20:44:36+00:00
================================================================================
#############################
# 'ls -lahR' output follows #
#############################
/var/tmp/deprecateHetzner1/osemain:
total 3.3G
drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 17:48 .
drwx------ 9 root      root      4.0K Jul 17 00:15 ..
-rw-r--r-- 1 maltfield maltfield 2.9G Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_home.tar.bz2
-rw-r--r-- 1 maltfield maltfield 1.2M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-openswh.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield 187K Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_fef.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield 157K Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osesurv.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield   14 Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_website.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield 203M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osewiki.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield 212M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_webroot.tar.bz2
================================================================================
############################
# tarball contents follows #
############################
drwxr-xr-x osemain/osemain   0 2017-06-21 21:27 usr/home/osemain/bin/
-rw-r--r-- osemain/osemain 429 2011-07-19 03:46 usr/home/osemain/bin/wiki-import.sh
lrwxrwxrwx osemain/osemain   0 2011-12-26 01:01 usr/home/osemain/bin/java -> ../jdk-6/bin/java
lrwxrwxrwx osemain/osemain   0 2017-06-21 21:27 usr/home/osemain/bin/cleanLocal.pl -> ../backups/cleanLocal.pl
-rwxr-xr-x osemain/osemain 1248055 2016-01-13 01:07 usr/home/osemain/bin/composer.phar
lrwxrwxrwx osemain/osemain       0 2017-06-21 14:21 usr/home/osemain/bin/backup.sh -> ../backups/backup.sh
lrwxrwxrwx osemain/osemain       0 2011-11-25 11:00 usr/home/osemain/bin/mbkp -> /usr/home/osemain/mbkp/mbkp
-rw-r--r-- osemain/osemain      75 2016-01-13 01:48 usr/home/osemain/composer.json
-rw-r--r-- osemain/osemain   32967 2016-01-13 01:48 usr/home/osemain/composer.lock
drwxr-xr-x osemain/osemain       0 2017-08-26 15:34 usr/home/osemain/cron/
-rw-r--r-- osemain/osemain     648 2011-07-19 13:28 usr/home/osemain/emails.txt
drwxr-xr-x osemain/osemain       0 2016-01-13 01:48 usr/home/osemain/extensions/
drwxr-xr-x osemain/osemain       0 2016-01-13 01:48 usr/home/osemain/extensions/Validator/
-rw-r--r-- osemain/osemain     755 2014-06-25 21:40 usr/home/osemain/extensions/Validator/phpunit.xml.dist
-rw-r--r-- osemain/osemain      62 2014-06-25 21:40 usr/home/osemain/extensions/Validator/.gitignore
-rw-r--r-- osemain/osemain     467 2014-06-25 21:40 usr/home/osemain/extensions/Validator/.travis.yml
-rw-r--r-- osemain/osemain   18575 2014-06-25 21:40 usr/home/osemain/extensions/Validator/COPYING
drwxr-xr-x osemain/osemain       0 2014-06-25 21:40 usr/home/osemain/extensions/Validator/src/
drwxr-xr-x osemain/osemain       0 2014-06-25 21:40 usr/home/osemain/extensions/Validator/src/legacy/
-rw-r--r-- osemain/osemain   14953 2014-06-25 21:40 usr/home/osemain/extensions/Validator/src/legacy/ParserHook.php
-rw-r--r-- osemain/osemain     222 2014-06-25 21:40 usr/home/osemain/extensions/Validator/src/legacy/README.md
[root@hetzner2 sync]# bzcat hetzner1_final_backup_before_hetzner1_deprecation_osemain.fileList.txt.bz2 | tail -n 20
-rw-r--r-- osemain/osemain      702 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/jsduck/external.js
-rw-r--r-- osemain/osemain     2613 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/jsduck/CustomTags.rb
-rw-r--r-- osemain/osemain     1988 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/checkBadRedirects.php
-rw-r--r-- osemain/osemain     3087 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/deleteOrphanedRevisions.php
-rw-r--r-- osemain/osemain     6072 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/populateImageSha1.php
drwxr-xr-x osemain/osemain        0 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/hiphop/
-rwxr-xr-x osemain/osemain      479 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/hiphop/run-server
-rw-r--r-- osemain/osemain      482 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/hiphop/server.conf
-rw-r--r-- osemain/osemain     3285 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/deleteOldRevisions.php
-rw-r--r-- osemain/osemain     2002 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/patchSql.php
-rw-r--r-- osemain/osemain     1901 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/dumpSisterSites.php
-rw-r--r-- osemain/osemain     1531 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/tidyUpBug37714.php
-rw-r--r-- osemain/osemain    10863 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/backup.inc
-rw-r--r-- osemain/osemain     5018 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/cleanupUploadStash.php
-rw-r--r-- osemain/osemain     3168 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/populateRecentChangesSource.php
-rw-r--r-- osemain/osemain     1868 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/deleteArchivedFiles.php
-rw-r--r-- osemain/osemain     2383 2015-03-31 18:45 usr/www/users/osemain/mediawiki-1.24.2.extra/maintenance/showSiteStats.php
-rw-r--r-- osemain/osemain      526 2015-06-19 19:51 usr/www/users/osemain/old.html
-rw-r--r-- osemain/osemain      883 2015-06-19 13:11 usr/www/users/osemain/oldu.html.done
================================================================================
[root@hetzner2 sync]# 
    1. I kicked off a sync; tomorrow I'll be able to print this job's results from the `aws` tool
[root@hetzner2 sync]# date
Wed Jul 25 22:31:02 UTC 2018
[root@hetzner2 sync]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 sync]# glacier.py --region us-west-2 vault --wait sync deleteMeIn2020
usage: glacier.py [-h] [--region REGION] {vault,archive,job} ...
glacier.py: error: unrecognized arguments: --wait
[root@hetzner2 sync]# glacier.py --region us-west-2 vault sync --wait deleteMeIn2020
  1. ...
  1. Marcin hit a 500 issue when attempting to access a new user's resume at https://wiki.opensourceecology.org/index.php?title=Special:ConfirmAccounts/authors&file=5c659e12860088d40865bd5637dde1c8eeb61762.pdf :
    1. strange, I couldn't hit this page myself. The iptables logs (/var/log/kern.log) say it's dropping me, but neither `iptables-save` nor `iptables -nL` shows a rule that should be blocking me
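    2. when I get back to this, one debugging avenue might be to watch the per-rule packet counters while re-requesting the page (just a sketch; nothing run yet):
# show INPUT rules with packet/byte counters & rule numbers
iptables -L INPUT -nv --line-numbers

# watch the kernel log for the drop messages while reproducing the issue
tail -f /var/log/kern.log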

Mon Jul 23, 2018

  1. Chris got a new email & PGP key. I sent him Marcin's & my PGP keys.
  2. Marcin mentioned that we need a new version of OSE Linux before the Boot Camp on Aug 25th. I asked Chris if he would have time (after his exams) to push a new version with the CNC Circuit Mill Toolchain + Persistence before then. He said he should be able to release it before Aug 17th.

Thr Jul 19, 2018

  1. interesting, osedev.org is down this morning
  2. hetzner shut down our (hopefully now deprecated) hetzner1 server as requested. They said that they rebooted it into a "live-rescue" system so it could be re-enabled quickly (their delay in responding to email is like 12 hours, so I'm not sure what this buys us).
  3. I did an nmap scan & saw that ssh was still enabled, but presenting a different set of host keys. I sure hope they didn't do something stupid like leaving PermitRootLogin enabled with some shitty default password
user@ose:~$ nmap dedi978.your-server.de

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-19 15:16 EDT
Nmap scan report for dedi978.your-server.de (78.46.3.178)
Host is up (0.19s latency).
Not shown: 942 closed ports, 56 filtered ports
PORT    STATE SERVICE
22/tcp  open  ssh
222/tcp open  rsh-spx

Nmap done: 1 IP address (1 host up) scanned in 22.70 seconds
user@ose:~$ 

  1. I confirmed that all the old domains on hetzner1 are now inaccessible
    1. http://blog.opensourceecology.org.dedi978.your-server.de/
    2. http://civicrm.opensourceecology.org.dedi978.your-server.de/
    3. http://forum.opensourceecology.org.dedi978.your-server.de/
    4. http://test.opensourceecology.org.dedi978.your-server.de/


  1. I updated our wiki article on the CHG to deprecate hetzner1 with the latest status updates https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation
  2. I went to check on the status of the inventory sync by glacier.py, and the job appears to still be pending!
[root@hetzner2 ~]# glacier.py --region us-west-2 job list
i/d 2018-07-18T22:54:13.824Z deleteMeIn2020 
[root@hetzner2 ~]# glacier.py --region us-west-2 vault list
deleteMeIn2020
[root@hetzner2 ~]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 ~]# date
Thu Jul 19 20:04:51 UTC 2018
[root@hetzner2 ~]# 
  1. I don't know why glacier.py is totally failing at displaying the inventory of our vault. Let's try the native `aws glacier` tool
[root@hetzner2 ~]# aws glacier describe-vault --account-id - --vault-name deleteMeIn2020
{
	"SizeInBytes": 290416046788, 
	"VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", 
	"LastInventoryDate": "2018-07-17T08:46:39.837Z", 
	"NumberOfArchives": 55, 
	"CreationDate": "2018-03-29T21:36:06.041Z", 
	"VaultName": "deleteMeIn2020"
}
[root@hetzner2 ~]# 
  1. already, that ^ is better than glacier.py. I did a listing of the jobs & found that my inventory retrieval succeeded
[root@hetzner2 ~]# aws glacier list-jobs --account-id - --vault-name deleteMeIn2020
{
	"JobList": [
		{
			"CompletionDate": "2018-07-18T22:54:13.824Z", 
			"VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", 
			"InventoryRetrievalParameters": {
				"Format": "JSON"
			}, 
			"Completed": true, 
			"InventorySizeInBytes": 20532, 
			"JobId": "4JhNUasCqaK2UkHm2CNvrk2kEQeBgUJViT5iHJF6HimCZZIufNbCxTX3GH6lKpeVIDd4iuzzZ6Q8SvXoZo0B3shKLALb", 
			"Action": "InventoryRetrieval", 
			"CreationDate": "2018-07-18T19:09:42.415Z", 
			"StatusMessage": "Succeeded", 
			"StatusCode": "Succeeded"
		}
	]
}
[root@hetzner2 ~]# 
  1. I got the job's output in json
[root@hetzner2 ~]# aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id '4JhNUasCqaK2UkHm2CNvrk2kEQeBgUJViT5iHJF6HimCZZIufNbCxTX3GH6lKpeVIDd4iuzzZ6Q8SvXoZo0B3shKLALb' output.json
{
	"status": 200, 
	"acceptRanges": "bytes", 
	"contentType": "application/json"
}
[root@hetzner2 ~]# 
  1. the inventory is there! but it's not easy to read
[root@hetzner2 ~]# cat output.json 
{"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020","InventoryDate":"2018-07-17T08:46:39Z","ArchiveList":[{"ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:35:48Z","Size":380236,"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"},{"ArchiveId":"lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:50:36Z","Size":280709,"SHA256TreeHash":"3f79016e6157ff3e1c9c853337b7a3e7359a9183ae9b26f1d03c1d1c594e45ab"},{"ArchiveId":"fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:53:00Z","Size":280718,"SHA256TreeHash":"6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4"},{"ArchiveId":"zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:55:04Z","Size":1187682789,"SHA256TreeHash":"c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f"},{"ArchiveId":"Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:57:50Z","Size":877058000,"SHA256TreeHash":"fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b"},{"ArchiveId":"P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA","ArchiveDescription":"hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name","CreationDate":"2018-03-31T22:46:18Z","Size":2299038,"SHA256TreeHash":"2e789c8c99f08d338f8c1c2440afd76c23f76124c3dbdd33cbfa9f46f5c6b2aa"},{"ArchiveId":"o-naX0m4kQde-2i-8JZbEESi7r8OlFjIoDjgbQSXT_zt9L_e7qOH3HQ1R7ViQC3i7M0lVLbODsGZm9w9HfI3tHYKb2R1T_WWBwMxFuC_OhYiPX8uepTvvBg2Mg6KysP9H3zNzwGSZw","ArchiveDescription":"hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix 
indicates","CreationDate":"2018-03-31T23:47:51Z","Size":12009829896,"SHA256TreeHash":"022f088abcfadefe7df5ac770f45f315ddee708f2470133ebd027ce988e1a45d"},{"ArchiveId":"mxeiPukWr03RpfDr49IRdJUaJNjIWQM4gdz8S8k3-_1VetpneyWZbwEVKCB1uMTYpPy0L6HZgZP7vJ6b7gz1oeszMnlzZR0-W6Rgt4O0BZ_mwgtGHRKOH0SIpMJHRnePaq9SBR9gew","ArchiveDescription":"hetzner1_20171001-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-01T20:20:52Z","Size":2309259,"SHA256TreeHash":"2d2711cf7f20b52a22d97b9ebe7b6a0bd45a3211842be66ca445b83fbc7984e5"},{"ArchiveId":"TOZBeL9sYVRtzy7gsAC1d930vcOhEBaABsh1ejb6vvad_NVSLu_1v0UvWqwkkf7x_8CCu6_WxolooSClZMhQOA21J_0_HP9GxvPkUvdSOeqmHjuANbIS82IRBOjFT4zFUoZnPhcVUg","ArchiveDescription":"hetzner1_20171001-052001.tar.gpg","CreationDate":"2018-04-01T21:42:48Z","Size":12235848201,"SHA256TreeHash":"a9868fdd4015fedbee5fb7b555a07b08d02299607eb64e73da689efb6bbad1ed"},{"ArchiveId":"LdlFgzhEnxVsuGMZU4d2c_rfMTGM_3iCvLUZZSpGmmLArCQLs8HxjWLwfDDeKPKEarvSgXOVA-Evy4Ep5WAzESoofG5jdCidL5OispSfHElpPu-60xbmNvQt9neLGZrwa3C_iESGiw","ArchiveDescription":"hetzner1_20171101-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-02T18:52:49Z","Size":2325511,"SHA256TreeHash":"920247e75ab48e16f18e7c0528778ea95ac0b74ffb18cdb3a68c0538d3e701e4"},{"ArchiveId":"6GHR8GlRG4EIlkA7O_Ta6BAXN3BQ7HmP0V7TgOp6bOa4cxuIlbHkmCd3I2lUSNwfG1penWOibFvvDhzgcihdmUMtCLepT3rl6HtFR5Lv-ro5mIegCcWQJOUDT0FRfsb7e7IkAze02Q","ArchiveDescription":"hetzner1_20171101-062001.tar.gpg","CreationDate":"2018-04-02T20:18:50Z","Size":12385858738,"SHA256TreeHash":"24c67d8686565c9f2b8b3eeacf2b4a0ec756a9f2092f44b28b56d2074d4ad364"},{"ArchiveId":"lryfyQFE4NbtWg5Q6uTq8Qqyc-y9il9WYe7lHs8H2lzFSBADOJQmCIgp6FxrkiaCcwnSMIReJPWyWcR4UOnurxwONhw8fojEHQTTeOpkf6fgfWBAPP9P6GOZZ0v8d8Jz_-QFVaV6Bw","ArchiveDescription":"hetzner1_20171201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-02T20:56:23Z","Size":2332970,"SHA256TreeHash":"366449cb278e2010864432c0f74a4d43bceff2e6a7b2827761b63d5e7e737a01"},{"ArchiveId":"O19wuK1PL_Wwf59-fjQuVP2Con0LXLf5Mk9xQA3HDPw4y1ZdwjYdFzmhZdaMUtGX666YKKjJu823l2C6seOTLg1ZbXZVTqQjZTeZGkQdCSRQdxyo3pEPWE2Iqpgb61FCiIETdCANUQ","ArchiveDescription":"hetzner2_20170702-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T12:29:09Z","Size":2039060,"SHA256TreeHash":"24df13a4150ab6ae7472727769677395c5e162660aea8aa0748833ad85c83e7a"},{"ArchiveId":"6ShVCMDoqdhc4wg84L1bXaq3O2InX-qB9Q9NMRH-xJQ0_TSlIN5b3fysow9-_RuNYc2lK958NrwFiIEa7Q0bVaT9LaZQH8WtoTqnX3DN2xJhb4_KUdu6iUaDdJUoPfsSXtC7xvPb-w","ArchiveDescription":"hetzner2_20170702-052001.tar.gpg","CreationDate":"2018-04-04T15:52:53Z","Size":21323056209,"SHA256TreeHash":"55030f294360adf1ed86e6e437a03432eb990c6d9c3e6b4a944004ad88d678e8"},{"ArchiveId":"0M5MSxjrlWJiT0XrncbVBITR__anuTLeOhcq9XvqsX0Q1koa0K0bH-wrZOQO7YsqqPv5Te3AUXPOCzIO6F0g5DQ2tOZq8E_YHX0XmMGjnOfeHIV9m_5GiCQAi3PrUuWM3C4cApTs7A","ArchiveDescription":"hetzner2_20170801-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T15:54:20Z","Size":198754,"SHA256TreeHash":"978f15fec9eef2171bdddf3da501b436d3bc3c7e518f2e70c0a1fd40ccf20d2a"},{"ArchiveId":"fwR6U5jX2T9N4mc14YQNMoA52vXICj-vvgIvYyDO5Qcv-pNeuXarT4gpzIy-XjuuF4KXkp9BXD13AA3hsau9PfW0ypy874m7arznCaMZO8ajm3NIicawZMiHGEikWw82EGY0z4VDIQ","ArchiveDescription":"hetzner2_20170801-072001.tar.gpg","CreationDate":"2018-04-04T16:08:27Z","Size":1746085455,"SHA256TreeHash":"6f3c5ede57e86991d646e577760197a01344bf013fb17a966fd7e2440f4b1062"},{"ArchiveId":"EZG83EoQ65jxe4ye0-0qszEqRjLE3lAb2Vi7vZ2eYvj1bVJnTc5kvfWgTxl4_w2G1PPk4pn6g2dIsYXosWk3OqWNaWNcYEOHEkNREHycnTpcl0rBkWJoimt9fCKLJCF7FiGavWUMSw","ArchiveDescription":"hetzner2_201709
01-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:09:29Z","Size":287980,"SHA256TreeHash":"b72f11bb747ddb47d88f0566122a1190caf569b2b999ed22b8a98c6194ac4a0e"},{"ArchiveId":"5xqn4AAJhxnOHLJMfkvGX3Qksj5BTiEyHURILglfH0TPh_GfvbZNHqzdYIW-8sMtJ8OQ1GnnFqAOpty5mMwOSEjaokWkrQhEZK9-q7FBKDXXglAlqQKEJpd2UcTQI47zBEmGRasm-A","ArchiveDescription":"hetzner2_20170901-072001.tar.gpg","CreationDate":"2018-04-04T16:27:43Z","Size":1800393587,"SHA256TreeHash":"87400a80fc668b03ad712eaf8f6096172b5fc0aaa81309cc390dd34cc3caecec"},{"ArchiveId":"3XL4MENpH6i2Dp6micFWmMR2-qim3D1LQGiyQHME_5_A5jAbepw7WDEJOS2m2gIudSXfCuyclHTqzZYEpr6RwTGIEmYGw1jQ-EDPWYzjGTGDJcwWZEiklTmhLgvezqfyeSnQsdQZtA","ArchiveDescription":"hetzner2_20171001-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:29:10Z","Size":662050,"SHA256TreeHash":"506877424ae5304ca0c635d98fb8d01ad9183ec46356882edf02d61e9c48da8d"},{"ArchiveId":"g8RFNrkxynpQ8Yt9y4KyJra09dhxd3fIJxDlaUeDYBe615j7XON8gAdHMAQVerPQ4VF10obcuHnp64-kJFMmkG722hrlp3QBKy262CD4CcSUTSk3m070Mz6q3xySkcPzqRyxDwjtYg","ArchiveDescription":"hetzner2_20171001-072001.tar.gpg","CreationDate":"2018-04-04T16:51:09Z","Size":2648387053,"SHA256TreeHash":"1bf72e58a8301796d4f0a357a3f08d771da53875df4696ca201e81d1e8f0d82b"},{"ArchiveId":"ktHLXVqR5UxOoXEO5uRNMrIq4Jf2XrA6VmLQ0qgirJUeCler9Zcej90Qyg9bHvhQJPreilT4jwuW08oy7rZD_jnjd_2rcdZ11Y5Zl3V25lSKdRPM-b21o21kaBEr_ihhlIxOmPqJXg","ArchiveDescription":"hetzner2_20171101-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T16:51:40Z","Size":280741,"SHA256TreeHash":"f227ecd619df1564f2bb835029864fad804461db5496f6825a76e4720e3098a7"},{"ArchiveId":"iUmKTuLdEX3By9oHoqPtJd4KpEQ_2xh5PKV4LPuwBDcXyZmtt4zfq96djdQar1HwYmIh64bXEGqP7kGc0hk0ZtWZc12TtFUL0zohEbKBYr2VFZCQHjmc461TMLskKsOiyd6HbuKUWg","ArchiveDescription":"hetzner2_20171101-072001.tar.gpg","CreationDate":"2018-04-04T16:59:35Z","Size":878943524,"SHA256TreeHash":"7cf75fb3c67f0539142708a4ff9c57fdf7fd380283552fe5104e23f9a0656787"},{"ArchiveId":"6gmWP3OdBIdlRuPIbNpJj8AiaR-2Y4FaPTneD6ZwZY2352Wfp6_1YNha4qvO1lapuITAhjdh-GzKY5ybgJag8O4eh8jjtBKuOg3nrjbABpeS7e6Djc-7PEiMKskaeldv5M52gHFUiA","ArchiveDescription":"hetzner2_20171202-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:00:09Z","Size":313055,"SHA256TreeHash":"cfac22e7a2b59e28fe13fb37567d292e5ee1e9c06da264573091e26a2640a161"},{"ArchiveId":"4Ti7ZVFaexAncEDgak5Evp97aQk45VLA6cix3OCEB1cuGM6akGq2pINO8bzUjhEV8nvpqLLqoa_MSxPWTFl4uQ8sPUCDqG0vayB8PhYHcyNES09BQR9cE2HlR7qfxMDl5Ue946jcCw","ArchiveDescription":"hetzner2_20171202-072001.tar.gpg","CreationDate":"2018-04-04T17:12:23Z","Size":1046884902,"SHA256TreeHash":"d1d98730e5bb5058ac96f825770e5e2dbdbccb9788eee81a8f3d5cb01005d4e5"},{"ArchiveId":"GSWslpTGXPiYW5-gJZ4aLrFQGfDiTifPcqsSbh8CZc6T4K8_udBkSrNV0GNZQB9eLoRrUC5cXYT06FSvZ8kltgM61VUVYOXvO0ox4jYH68_sjHnkUmimk8itpa34hBC_c0zS0ZFRLQ","ArchiveDescription":"hetzner2_20180102-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:13:04Z","Size":499163,"SHA256TreeHash":"dfbc8647e0c402d1059691322ba9f830d005eae75d38840792b5cb58cd848c76"},{"ArchiveId":"3nDMsn_-0igfg6ncwMqx3-UxQLi-ug6LEoBxqyZKsMhd83PPoJk1cqn6QFib2GeyIgJzfCZoTlwrpe9O0_GnrM7u_mUEOsiKTCXP0NadvULehNcUx-2lWQpyRrCiDg5fcBb-f7tY0g","ArchiveDescription":"hetzner2_20180102-072001.tar.gpg","CreationDate":"2018-04-04T17:22:57Z","Size":1150541914,"SHA256TreeHash":"9ca7fa55632234d3195567dc384aaf2882348cccb032e7a467291f953351f178"},{"ArchiveId":"CnSvT3qmkPPY7exbsftSC-Ci71aqjLjiL1eUa3hYo3OfVkF4s2SQ8n39rH5KaQwo3GTHeJZOVoBTW9vMEf2ufYKc9e_eVAfVcmG-bLgncRQrrV-DlE2hYglzdAalx3H5OXBY8jlD9Q","ArchiveDescription":"hetzner2_2018020
2-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T17:31:24Z","Size":2480097,"SHA256TreeHash":"ae746d626f04882c2767e9db7dd1ffd735b3e08bc6de5877b0b23174f14cc1ff"},{"ArchiveId":"WWIYVa-hqJzMS8q1UNNZIKfLx1V3w3lzqpCLWwflYBM7yRocX2CEyFA-aY2EKJt0hRLTshgLXE3L3Sni8bYabDLBrV2Gehgq9reRTRhn8cxoKks4f1NmZwCCTSs6L4bQuJnjjNvOKw","ArchiveDescription":"hetzner2_20180302-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T18:36:50Z","Size":3530165,"SHA256TreeHash":"52f24fe7626401804799efc0407b892b7a0188f8f235d183228c036a5544d434"},{"ArchiveId":"XQYjqYnyYbKQWIzc1BSWQpn2K8mIoPQQH-bnoje7dB3BGCbzTjbEATGYSV1qJMbeUhiT_b7lwDiZzW1ZEbHVCgMDrWxCswG3eTZxiFdSwym7rELpFh5eC7XQlxuHjHocLY2zbUhYvg","ArchiveDescription":"hetzner2_20180401-072001.fileList.txt.bz2.gpg","CreationDate":"2018-04-04T22:19:13Z","Size":1617586,"SHA256TreeHash":"21c578c4b99abab6eb37cb754dd36cdcc71481613bf0031886cca81cd87c8d6b"},{"ArchiveId":"kn9SKSliFV1eHh_ax1Z9rEWXR5ETF3bhdoy6IuyItI3w63rBgxaNVNk5AFJLpcR2muktNFmsSEp8QucM-B4tMdFD6PtE4K8xPJe_Cvhv3G4e2TPKn3d9HMD5Bx3XjTimGHK6rHnz0A","ArchiveDescription":"hetzner2_20180401-072001.tar.gpg","CreationDate":"2018-04-04T22:43:39Z","Size":2910497961,"SHA256TreeHash":"e82e8df61655c53a27f049af8c97df48702d5385788fb26a02d37125d102196a"},{"ArchiveId":"4-Rebjng1gztwjx1x5L0Z1uErelURR5vmCUGD3sEW6rBQRUHRjyEQWL22JAm6YPpCoBwIxzVDPyC2NvSofxx2InjmixAUoQsyy3zAgGoW0nSlqNQPfeF1hkRdOCyIDutfMTQ1keEQw","ArchiveDescription":"hetzner1_20170701-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T00:36:36Z","Size":2430229,"SHA256TreeHash":"e84e7ff3fb15b1c5cf96b3f71ee87f88c39798aea3a900d295114e19ffa0c29f"},{"ArchiveId":"OVSNJHSIy5f1WRnisLdZ9ElWY4AjdgZwFqk3vDISCtypn5AHVo7wDGOAL76SpF0XzAd-yLgD3fIzf7mvgR4maA_HCANBhIP7Sdvhi7MLMjLnXLoKoHuKayBok_VLNRFfT5XORaTemA","ArchiveDescription":"hetzner1_20170801-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T03:52:16Z","Size":2485018,"SHA256TreeHash":"27ee0c5d5f20b87ff9c820dac1e5f3e989ab4ba679e94a8034a98d718564a5cd"},{"ArchiveId":"N1TB1zWhwJq20nTRNcIzVIRL9ms1KnszY0C4XAKhfTgtuWaV1SFWlqaA0xb6NjbX6N3XDisuP0bke-I0G_8RbsFQ_PcRTwRZzNEbr4LOU4WFhLM86s-FjDwjdJHmgyttfMh_1K9RLQ","ArchiveDescription":"hetzner1_20180101-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T07:28:56Z","Size":2349744,"SHA256TreeHash":"943aa9704177a35cd45ae11b705d9e238c9e6af1c86bc6ebed46c0ae9deff97a"},{"ArchiveId":"wJyG1vWz9IcB8-mnLm9bY3do9KIsxNY9nQ8ClQaOALesN-k3R5GU11p7Q3sVeStelg9IzWvburDcVFdHmJIYHC9RuRbuSZbk_rQvxxrkhtDcviu4i9_hN4SnPHvV3i0hITuiEFGpkA","ArchiveDescription":"hetzner1_20180201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T10:07:11Z","Size":2414523,"SHA256TreeHash":"704e59a90b38e1470a7a647f21a9c0e2887b84d73af9cc451f1b0b1c554b5cb7"},{"ArchiveId":"hPtzfNk9SSUpI-_KihUEQOb89sbrK3tr0-3au-pe7al_e8qetM7uQEbNTH4_oWPqD2yajF79XPXxi4wkqAcQjoAN4IhnkPVb846wODKTpFXkRs9V8lz6nW0t_GdR2c9uYXf-xM_MpQ","ArchiveDescription":"hetzner1_20180201-062001.tar.gpg","CreationDate":"2018-04-05T13:47:38Z","Size":28576802304,"SHA256TreeHash":"dd538e88ce29080099ee59b34a7739885732e1bb6dfa28fe7fa336eb3b367f47"},{"ArchiveId":"osvrVQsHYSGCO30f0kO9aneACAA8h80KBmqfBMqDG3RioepW6ndLlNBcSvhfQ2nrcWBwLabIn4A7Rkr7sjbddViPo92viBh4lyZdyDwVcm6Pp1hQv-p2j0vldxYLWpyLDflQ8QRn4A","ArchiveDescription":"hetzner1_20180301-062002.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T15:05:32Z","Size":2436441,"SHA256TreeHash":"b3e6651391632d17ecc729255472cd5affaea7c2c65585a5d71d258199d6af48"},{"ArchiveId":"OtlG0WN4qd8kIg3xRQvRHoAzICwHRg6S3I8df5r_VRKaUNzJCsnwbO8Z9RiJPAAqqqVqg9I_GKhnt7txvEdUjx5s9hLywWm_OcRm5Lj_rJV_dupUwVlTG8HsdnCIwFseGa1JD5bviw","A
rchiveDescription":"hetzner1_20180301-062002.tar.gpg","CreationDate":"2018-04-05T18:57:24Z","Size":29224931379,"SHA256TreeHash":"3a6b009477ffe453f5460ab691709ce0dcdf6e9ae807a43339b61f0e6c5785ab"},{"ArchiveId":"2PAyQClvhEMhO-TxdAvV9Qdqa_Lvh4webx9hHIXbVnQQHJxMlhWPikmVpr1zTQRgy23r-WcOouH6gLKQ7WBRSH5yM8q5f8gb0Z2anOAwdR4A9DtxqDIVtI78-7Bs3Bf2b0fYbPQCWw","ArchiveDescription":"hetzner1_20180401-052001.fileList.txt.bz2.gpg","CreationDate":"2018-04-05T19:31:28Z","Size":2231140,"SHA256TreeHash":"a8a2712abf9434fa38d9aa3eb52c69accffdb4a2abe79425c3057d537b014a47"},{"ArchiveId":"Gn7a5jzeimXwa3su0i02OAK2XFmK9faX2WZx77Zq_tOx6j7ihpFEnkuF97Dpo66NgF7M24orh50kMSphvzLex_NbP9tDNoOI8mYG0-7GzOmNSmw9NaZpMLGn9NAVKbxs0byJ3YkquA","ArchiveDescription":"hetzner1_20180401-052001.tar.gpg","CreationDate":"2018-04-05T21:05:59Z","Size":12475250059,"SHA256TreeHash":"e256db8915834ddc3c096f1f3b9e4315bb56857b962722fb4093270459ed1116"},{"ArchiveId":"UqxNCpEu1twmhb9qLPxpQXMBv6yLyR37rZ1T_1tQjdl8x0RwukdIoOEGcmpHwdtrJgTA2OrWZ3ZYTncxkXojwWAOROW-wJ4SJANFfxwvGfueFNUSn17qTggcqeE43I5P1xmlxb25wg","ArchiveDescription":"hetzner1_20170701-052001.tar.gpg","CreationDate":"2018-04-07T19:26:56Z","Size":40953093076,"SHA256TreeHash":"5bf1d49a70b4031cb56b303be5bfed3321758bdf9242c944d5477eb4f3a15801"},{"ArchiveId":"NR3Z9zdD2rW0NG1y3QW735TzykIivP_cnFDMCNX6RcIPh0mRb_6QiC5qy1GrBTIoroorfzaGDIKQ0BY18jbcR3XfEzfcmrZ1FiT1YvQw-c1ag6vT46-noPvmddZ_zyy2O1ItIygI6Q","ArchiveDescription":"hetzner1_20171201-062001.fileList.txt.bz2.gpg","CreationDate":"2018-04-07T21:02:35Z","Size":2333066,"SHA256TreeHash":"9775531693177494a3c515e0b3ab947f4fd6514d12d23cb297ff0b98bc09b1be"},{"ArchiveId":"3wjWOHj9f48-L180QRkg7onI5CbZcmaaqinUYJZRheCox-hc021rQ3Tl1Houf0s5W-qzk6HVRz3wkilQI_TAi2PXWaFUMibz00DAQfGj9ZQKeSOlxE_3qsIRcmYsYo-TMaU2UsSqNA","ArchiveDescription":"hetzner1_20171201-062001.tar.gpg","CreationDate":"2018-04-07T21:55:57Z","Size":12434596732,"SHA256TreeHash":"c10ce8134ffe35ba1e02d6076fc2d98f4bb3a288a5fe051fcb1e33684365ee19"},{"ArchiveId":"OfCmIMVetV8SxOBYUGFWldcHWFaFuGeLrYYm3A4YrvUU93zBrCLkOoBssToY1QIt_ZGwIueTgyoLTADetpfgswaoou_CwD8xfqss1hQAbQ7CaKW6sQHD-kcw4ii-D1h22lap95AZ4g","ArchiveDescription":"hetzner2_20180202-072001.tar.gpg","CreationDate":"2018-04-07T23:39:13Z","Size":14556435291,"SHA256TreeHash":"456b44a88a8485ceaf2080b15f0b6b7e6728caaec6edf86580369dfe91531df9"},{"ArchiveId":"PLs1lsB4c1dV3YaBG1y2SN3OEWmtImJVlz6CA6IknA6y3R8yfQV3FXcLXWC_YpczM6t05xigcynA7m1A6GkuHIyTDOr6-DCOLlEvxDHmFrA4hrzJkl2pLquNWJ9yc-JC83ZV4SkM-Q","ArchiveDescription":"hetzner2_20180302-072001.tar.gpg","CreationDate":"2018-04-08T01:47:51Z","Size":26269217831,"SHA256TreeHash":"c5b96e3419262c4c842bd3319d95dd07c30a1f00d57a2a2a6702d1d502868c98"},{"ArchiveId":"QwTHHmRo-NpqTTe2uy87GgB2MVydnz--3-3Z5u_0gdh5FPxEl2YSyjmJy3CKNDmJaNtrmwLeRF4_GubyZFc-CzlWl6OqZmINkCVSz34wY-k336C8HUOoKm5tPV3riSYaPb7WjjXwNQ","ArchiveDescription":"hetzner1_20170801-052001.tar.gpg","CreationDate":"2018-04-08T09:10:31Z","Size":41165282020,"SHA256TreeHash":"3fca81cf39f315fb2063714fffa526a533a7b3a556a6c4361a6ca3458ed66f29"},{"ArchiveId":"EmeH9kAWeVAyMa68pIknrJ135ZyXKB8WcjVKGQ58cVQE4Q98SMsX1OerOA4-_Q6epBJ_hgUT7ztFQ5d6PNiPRJ3H8uUIqXG3pkve5MaeA_cqAqvu4apBhU2HgALb1iS3NKy5IRdeUg","ArchiveDescription":"hetzner1_20180101-062001.tar.gpg","CreationDate":"2018-04-08T10:22:21Z","Size":12517449646,"SHA256TreeHash":"27e393d0ece9cadafa5902b26e0685c4b6990a95a7a18edbce5687a2e9ed4c55"},{"ArchiveId":"NX074yaGa7FGL_QH5W9-mZ9TVmi0T2428B1lW8aEck6Ydjk3H3W6pgQTisOqE9B7azs1jykJ_IL-fdbkLzhAmrpWNGJBq5hVjfMNSP-Msm976Mf7mnXe6Z6QDkO5PVXaFsNZ1EzNyw","ArchiveDescription
":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:29:47Z","Size":258332,"SHA256TreeHash":"7541016c23b31391157a4151d9277746585b220afdf1e980cafe205474f796d4"},{"ArchiveId":"uHk-GTBb6LVulxOkgs_ZYdF-cvKubUpvdP7hoS9Cqduw8YPInJaHB4LbBHpIxOL1idfYoMm-h4YI_Jq8qN3EnOBHiAjqUEwJAstagfMEvk2E38IlNLu_5J_09E0JM7MZXc4RSEZfNA","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:35:50Z","Size":7277,"SHA256TreeHash":"f431faa85f3d3910395e012c9eeeba097142d778b80e1d5c0a596970cc9c2fe6"},{"ArchiveId":"n8UslfWy3wmFYZNYJF3PfuxVoLNORVes-IunJoyzKJDYMNqmkwybrG9KVGoL4sbRspq0Tqmccn87hLGZ_A7kjBB6fvnWuAOjALhNinbDe-RkESPVPWN6464vfCIf3BI3NhK0_nzCNw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:35:52Z","Size":258332,"SHA256TreeHash":"2846f515119510f2d03f84d7aadc8648575155d2388251d98160c30d0b67cce8"},{"ArchiveId":"lzlnWYAQWMFp32BM163QS_8kb9wJ_kqaal2XmVb_rXLRDDXhSogYZCanA7oWyi3IdlWECd8R3KT3s50gJo8_kckLtq2uUUjG3Yl1wJuvXQfVh1AwzPOtLlyldqXmDoiVFzw-NrkpIw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:40:35Z","Size":7277,"SHA256TreeHash":"9dd14621a97717597c34c23548da92bcdf68960758203d5a0897360c3211a10c"},{"ArchiveId":"WimEI6ABJtXx6j4jVK4lrVKWbPmI1iPZLErplyB7ChN6MSOH3yMOeANT7L3O6BBI4G17WjSIKE6EN6YgMP9OdgxF4XjHyyGUuWNwqy-nnIETKyp7YrFuuBkSiSloBhZkC6DRqpdrww","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:40:38Z","Size":258332,"SHA256TreeHash":"81c9683d541ede1b4646b19b2b3fa9be9891251a9599d2544392a5780edaa234"},{"ArchiveId":"k-Q9oBnWeC3P7zOEN6IMEVFjl3EwPkqi5kbEvEqby4zKEpb_aDj4f88Us1X7QBvG3Pi8GUriEnNlXXlNH5s4-4cBfQryVjY_MOAnSakhgCLXs-srczsWIZvtkkMsh4XFiBpVzYao3w","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:42:46Z","Size":7259,"SHA256TreeHash":"0aeed31f85186f67b0a5f457d4fbfe3d82e27fc8ccb5df8289a44602cb2b2b18"},{"ArchiveId":"Y7_nQQC2uSB7XXEfd_zaKpp_gqGPZ_TQTXDPLmSP8k77n9NImLnTL7apkE6AlJopAkgmPiLOaTgIXc4_mSkUFp5teSOxdPxk19Cvs2fL9S1Yv5U7wihZfrsrwNffyZl289J59G-UBg","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:42:49Z","Size":258332,"SHA256TreeHash":"c5b99e4ad85ce2eddda44a72243a69fa750ae25fdcffa26a70dcabfa1f4218c8"},{"ArchiveId":"zW6rdGwDojNoD-CjYUf8p_tbX2UMPHedXwUAM4GxNRkO0GoE1Ax5rpGr38LTnzZ_rCX-4F3kdJiAm1ahm-CfAzefUxenayuoS6cg384s5UHbZGsD2QpogBj9EJDDWlzrj8hr8DPC1Q","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg","CreationDate":"2018-07-17T00:45:28Z","Size":7259,"SHA256TreeHash":"dcf5c7bfbdeb25d789671a951935bf5797212963c582099c5d9da75ae1ecfccd"},{"ArchiveId":"W9argz7v3GxZUkItGwILf1QRot8BNbE4kOJVvUrwOGs72KGog0QCGc8PV-3cWUvhfxkCFLuoZE7qJCQmT2Cc_LvaV46hWFFvgs5TFBdIySr2jeil-d8cYR5oN9zAvkYCGuvDlXmgxw","ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg","CreationDate":"2018-07-17T00:45:31Z","Size":258328,"SHA256TreeHash":"75b3cdc350fb05d69874584ed6f3374a32b9142267ca916a0db6fc535711f24a"}]}[root@hetzner2 ~]# 
  1. some bash magic makes the json easier to parse, and we see that my test file is there, but--as I was adjusting the script--I accidentally re-uploaded it 4 and a half times. It doesn't make sense to bother deleting these duplicate archives; this is exactly why I tested the first upload with the smallest archive = 260KB, so the redundancy is a negligible cost (a cleaner alternative to the sed/awk parsing is sketched after the output below) https://stackoverflow.com/questions/1955505/parsing-json-with-sed-and-awk
[root@hetzner2 ~]# cat output.json | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020"
"InventoryDate":"2018-07-17T08:46:39Z"
"ArchiveList":["ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA"
"ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name"
"CreationDate":"2018-03-31T02:35:48Z"
"Size":380236
"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"
...
"ArchiveId":"QwTHHmRo-NpqTTe2uy87GgB2MVydnz--3-3Z5u_0gdh5FPxEl2YSyjmJy3CKNDmJaNtrmwLeRF4_GubyZFc-CzlWl6OqZmINkCVSz34wY-k336C8HUOoKm5tPV3riSYaPb7WjjXwNQ"
"ArchiveDescription":"hetzner1_20170801-052001.tar.gpg"
"CreationDate":"2018-04-08T09:10:31Z"
"Size":41165282020
"SHA256TreeHash":"3fca81cf39f315fb2063714fffa526a533a7b3a556a6c4361a6ca3458ed66f29"
"ArchiveId":"EmeH9kAWeVAyMa68pIknrJ135ZyXKB8WcjVKGQ58cVQE4Q98SMsX1OerOA4-_Q6epBJ_hgUT7ztFQ5d6PNiPRJ3H8uUIqXG3pkve5MaeA_cqAqvu4apBhU2HgALb1iS3NKy5IRdeUg"
"ArchiveDescription":"hetzner1_20180101-062001.tar.gpg"
"CreationDate":"2018-04-08T10:22:21Z"
"Size":12517449646
"SHA256TreeHash":"27e393d0ece9cadafa5902b26e0685c4b6990a95a7a18edbce5687a2e9ed4c55"
"ArchiveId":"NX074yaGa7FGL_QH5W9-mZ9TVmi0T2428B1lW8aEck6Ydjk3H3W6pgQTisOqE9B7azs1jykJ_IL-fdbkLzhAmrpWNGJBq5hVjfMNSP-Msm976Mf7mnXe6Z6QDkO5PVXaFsNZ1EzNyw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:29:47Z"
"Size":258332
"SHA256TreeHash":"7541016c23b31391157a4151d9277746585b220afdf1e980cafe205474f796d4"
"ArchiveId":"uHk-GTBb6LVulxOkgs_ZYdF-cvKubUpvdP7hoS9Cqduw8YPInJaHB4LbBHpIxOL1idfYoMm-h4YI_Jq8qN3EnOBHiAjqUEwJAstagfMEvk2E38IlNLu_5J_09E0JM7MZXc4RSEZfNA"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:35:50Z"
"Size":7277
"SHA256TreeHash":"f431faa85f3d3910395e012c9eeeba097142d778b80e1d5c0a596970cc9c2fe6"
"ArchiveId":"n8UslfWy3wmFYZNYJF3PfuxVoLNORVes-IunJoyzKJDYMNqmkwybrG9KVGoL4sbRspq0Tqmccn87hLGZ_A7kjBB6fvnWuAOjALhNinbDe-RkESPVPWN6464vfCIf3BI3NhK0_nzCNw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:35:52Z"
"Size":258332
"SHA256TreeHash":"2846f515119510f2d03f84d7aadc8648575155d2388251d98160c30d0b67cce8"
"ArchiveId":"lzlnWYAQWMFp32BM163QS_8kb9wJ_kqaal2XmVb_rXLRDDXhSogYZCanA7oWyi3IdlWECd8R3KT3s50gJo8_kckLtq2uUUjG3Yl1wJuvXQfVh1AwzPOtLlyldqXmDoiVFzw-NrkpIw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:40:35Z"
"Size":7277
"SHA256TreeHash":"9dd14621a97717597c34c23548da92bcdf68960758203d5a0897360c3211a10c"
"ArchiveId":"WimEI6ABJtXx6j4jVK4lrVKWbPmI1iPZLErplyB7ChN6MSOH3yMOeANT7L3O6BBI4G17WjSIKE6EN6YgMP9OdgxF4XjHyyGUuWNwqy-nnIETKyp7YrFuuBkSiSloBhZkC6DRqpdrww"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:40:38Z"
"Size":258332
"SHA256TreeHash":"81c9683d541ede1b4646b19b2b3fa9be9891251a9599d2544392a5780edaa234"
"ArchiveId":"k-Q9oBnWeC3P7zOEN6IMEVFjl3EwPkqi5kbEvEqby4zKEpb_aDj4f88Us1X7QBvG3Pi8GUriEnNlXXlNH5s4-4cBfQryVjY_MOAnSakhgCLXs-srczsWIZvtkkMsh4XFiBpVzYao3w"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:42:46Z"
"Size":7259
"SHA256TreeHash":"0aeed31f85186f67b0a5f457d4fbfe3d82e27fc8ccb5df8289a44602cb2b2b18"
"ArchiveId":"Y7_nQQC2uSB7XXEfd_zaKpp_gqGPZ_TQTXDPLmSP8k77n9NImLnTL7apkE6AlJopAkgmPiLOaTgIXc4_mSkUFp5teSOxdPxk19Cvs2fL9S1Yv5U7wihZfrsrwNffyZl289J59G-UBg"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:42:49Z"
"Size":258332
"SHA256TreeHash":"c5b99e4ad85ce2eddda44a72243a69fa750ae25fdcffa26a70dcabfa1f4218c8"
"ArchiveId":"zW6rdGwDojNoD-CjYUf8p_tbX2UMPHedXwUAM4GxNRkO0GoE1Ax5rpGr38LTnzZ_rCX-4F3kdJiAm1ahm-CfAzefUxenayuoS6cg384s5UHbZGsD2QpogBj9EJDDWlzrj8hr8DPC1Q"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg"
"CreationDate":"2018-07-17T00:45:28Z"
"Size":7259
"SHA256TreeHash":"dcf5c7bfbdeb25d789671a951935bf5797212963c582099c5d9da75ae1ecfccd"
"ArchiveId":"W9argz7v3GxZUkItGwILf1QRot8BNbE4kOJVvUrwOGs72KGog0QCGc8PV-3cWUvhfxkCFLuoZE7qJCQmT2Cc_LvaV46hWFFvgs5TFBdIySr2jeil-d8cYR5oN9zAvkYCGuvDlXmgxw"
"ArchiveDescription":"hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg"
"CreationDate":"2018-07-17T00:45:31Z"
"Size":258328
"SHA256TreeHash":"75b3cdc350fb05d69874584ed6f3374a32b9142267ca916a0db6fc535711f24a"]
[root@hetzner2 ~]# 
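  1. for future reference, a less fragile way to summarize the inventory than the sed/awk hack above would be to let python parse the json itself; this is just a sketch (not what I ran), assuming the inventory json was saved to output.json as above
python -c '
import json
with open("output.json") as f:
    inventory = json.load(f)
# print one line per archive: creation date, size in bytes, & description
for archive in inventory["ArchiveList"]:
    print("%s %12d  %s" % (archive["CreationDate"], archive["Size"], archive["ArchiveDescription"]))
'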
  1. anyway, I'm satisfied that the upload script is working. I went ahead and executed it to upload the remaining archives. Hopefully by next week I can validate that those have been uploaded too.
  1. ...
  1. continuing with duplicity, I tried to pass the '--batch --passphrase-file' options to gpg so that I could use a 4K symmetric key file rather than enter a unicode-limited (ascii-limited?) passphrase. It didn't work. I found a bug report from 8 years ago about the issue and asked for a status update. https://bugs.launchpad.net/duplicity/+bug/503305
[root@hetzner2 ~]# duplicity --gpg-options '--batch --passphrase-file /root/backups/ose-backups-cron.key' /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg b2://<obfuscated>:<obfuscated>@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase: 
  1. that ^ ticket says the workaround is to just cat the key file into an environment variable named "PASSPHRASE"
[root@hetzner2 ~]# export PASSPHRASE="`cat /root/backups/ose-backups-cron.key`"
[root@hetzner2 ~]# duplicity /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg b2://<obfuscated>:<obfuscated>@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1532034857.35 (Thu Jul 19 21:14:17 2018)
EndTime 1532034857.35 (Thu Jul 19 21:14:17 2018)
ElapsedTime 0.00 (0.00 seconds)
SourceFiles 1
SourceFileSize 7642 (7.46 KB)
NewFiles 1
NewFileSize 7642 (7.46 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 1
RawDeltaSize 7642 (7.46 KB)
TotalDestinationSizeChange 7873 (7.69 KB)
Errors 0
-------------------------------------------------

[root@hetzner2 ~]# 
  1. ^ that appears to have worked. I'll attempt a restore next week.
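  2. for reference, here's a sketch of the restore test I plan to attempt next week (not yet run); it assumes the same PASSPHRASE workaround & bucket as above, and the target directory name is just an example
export PASSPHRASE="`cat /root/backups/ose-backups-cron.key`"
# restore the (gpg-encrypted) backup set from b2 into a scratch dir
duplicity restore b2://<obfuscated>:<obfuscated>@ose-server-backups/ /var/tmp/duplicityRestoreTest
unset PASSPHRASE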

Wed Jul 18, 2018

  1. hetzner got back to me again about hetzner1; they said they can turn off the server, but we have no means to toggle the power state in konsoleH. I told them to turn it off, but to be sure to leave it idle/unused so that we can power it back on in case we discover any issues. I told them to *not* cancel the contract, but that we would do so after we confirmed that nothing broke on our end with the server offline.
    1. I could probably have them type a set of iptables rules in for me, but the lag time for each back-and-forth with hetzner is 24 hours. So if there were any issues with the iptables rules, it could take days to resolve. It's much easier to say "turn on/off the server" than to instruct them on iptables rules. The benefits of iptables are its low-impact action & quick reversal with a rules flush `iptables -F`. But that benefit is negated by this delay, so I'll just stick to "turn it off" and "turn it on" requests.
  2. meanwhile, regarding the backups of hetzner1: the first archive is still being validated. It looks like the job to sync the vault's contents finished, but the vault is still listed as empty!
[root@hetzner2 ~]# glacier.py --region us-west-2 job list
[root@hetzner2 ~]# glacier.py --region us-west-2 vault list
deleteMeIn2020
[root@hetzner2 ~]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 ~]# date
Wed Jul 18 18:44:14 UTC 2018
[root@hetzner2 ~]# 
  1. I logged into the aws console, which shows that there are 55 archives totaling 270.5G as of the last inventory on Jul 17, 2018 4:46:39 AM
  2. perhaps the issue is that I initiated the sync job before the archive upload had completed
  3. I checked my logs from when I was working with glacier.py in the past on hancock, and tried those commands, but they also returned an empty set https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q1#Sat_Mar_31.2C_2018
hancock% cd ~/sandbox/glacier-cli 
hancock% export AWS_ACCESS_KEY_ID='CHANGME'
hancock% export AWS_SECRET_ACCESS_KEY='CHANGEME'
hancock% /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py archive list deleteMeIn2020
hancock% 
  1. following the documentation page I wrote, I tried again, this time including the '--max-age=0' arg that I had omitted last time https://wiki.opensourceecology.org/wiki/OSE_Server
[root@hetzner2 ~]# glacier.py --region us-west-2 vault sync --max-age=0 --wait deleteMeIn2020

Tue Jul 17, 2018

  1. hetzner got back to me about dropping all non-ssh traffic via iptables. They advised me against it, saying there was no security benefit to doing so (lol), and said I had no means of doing it without their assistance. I responded saying that we indeed _do_ want to drop all non-ssh traffic with iptables, as it's part of our decommissioning process, and I asked what options we had based on our needs (a sketch of the rules I have in mind follows the quoted reply below).
Hi Emanuel,

Yes, we would like to block all non-ssh traffic to the server, not just
some subset of services such as databases.

We are in the process of decommissioning this server. We believe that we
have finished migrating all necessary services off of this server. To be
safe, I would like to leave the server in an online state with all
non-ssh traffic blocked via iptables for about a month prior to
canceling its contract.

I prefer iptables because--if we discover that something breaks after we
drop all non-ssh traffic--we can merely flush the iptables rules or stop
the iptables service to fully restore the server to its prior state.

The next best option would be to turn the server off, but I saw no means
to control firewall rules or even toggle the power on the server from
konsoleH.

Is there a way to enable a firewall on this server or turn it off for
about a month before we cancel its contract? Or is there some other
option we have for making this server entirely inaccessible to the
Internet (other than ssh), but such that we can easily restore the
server to its prior fully-functioning state if we discover a need for it?


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org

On 07/17/2018 03:34 AM, Managed Server Support wrote:
> Dear Mr. Jakubowski
>
> Closing all ports on the managed server is barely not possible. There is no need for security reasons to do this. We cant disable the MySQL and PostgreSQL port if you want so.
>
>> Please tell me how I can configure iptables on our server to obtain the
>> above requirements.
> You cant do this by your own.
>
>
> If we can be of any further assistance, please let us know.
>
>
> Mit freundlichen Grüßen / Kind regards
>
> Emanuel Wiesner
>
> Hetzner Online GmbH
> Industriestr. 25
> 91710 Gunzenhausen / Germany
> Tel: +49 9831 505-0
> Fax: +49 9831 505-3
> www.hetzner.com
>
> Registergericht Ansbach, HRB 6089
> Geschäftsführer: Martin Hetzner
>
> 16.07.2018 20:51 - elifarley@opensourceecology.org marcin@opensourceecology.org catarina@openmaterials.org tom.griffing@gmail.com michael@opensourceecology.org schrieb:
>
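  1. for the record, here's a sketch of the sort of ruleset I was describing to hetzner (never applied, since we can't touch iptables on this managed server ourselves); it assumes sshd on hetzner1 listens on tcp/222, per the Jul 16 notes below
# default-deny inbound, but keep loopback, established/related, & ssh (tcp/222)
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 222 -j ACCEPT
# full rollback if anything breaks: iptables -F && iptables -P INPUT ACCEPT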
  1. I tried to get a listing of the archives in the glacier vault, but it was still empty. It looks like the inventory job is still pending (initiated 15 hours ago)...
[root@hetzner2 ~]# glacier.py --region us-west-2 vault list
deleteMeIn2020
[root@hetzner2 ~]# glacier.py --region us-west-2 archive list deleteMeIn2020
[root@hetzner2 ~]# 
[root@hetzner2 ~]# glacier.py --region us-west-2 job list
i/d 2018-07-17T04:28:46.003Z deleteMeIn2020 
[root@hetzner2 ~]# date
Tue Jul 17 19:32:59 UTC 2018
[root@hetzner2 ~]# 
  1. ...
  1. marcin got back to me about Janus. He said he couldn't get jangouts to work. I tried, and I get the error "Janus error: Probably a network error, is the gateway down?: [object Object] Do you want to reload in order to retry?". I think this is because the janus gateway is still running on the old self-signed cert, instead of the let's encrypt cert (which I did set up nginx to use)
  2. I changed the certs in both files at /opt/janus/etc/janus/{janus.cfg,janus.transport.http.cfg} (the gateway itself presumably needs a restart to pick these up; see the note after the grep output below)
[root@ip-172-31-28-115 janus]# date
Tue Jul 17 20:03:50 UTC 2018
[root@ip-172-31-28-115 janus]# pwd
/opt/janus/etc/janus
[root@ip-172-31-28-115 janus]# grep 'cert_' *.cfg
janus.cfg:;cert_pem = /opt/janus/share/janus/certs/mycert.pem
janus.cfg:;cert_key = /opt/janus/share/janus/certs/mycert.key
janus.cfg:cert_pem = /etc/letsencrypt/live/jangouts.opensourceecology.org/fullchain.pem;
janus.cfg:cert_key = /etc/letsencrypt/live/jangouts.opensourceecology.org/privkey.pem;
janus.cfg:;cert_pwd = secretpassphrase
janus.transport.http.cfg:;cert_pem = /opt/janus/share/janus/certs/mycert.pem
janus.transport.http.cfg:;cert_key = /opt/janus/share/janus/certs/mycert.key
janus.transport.http.cfg:cert_pem = /etc/letsencrypt/live/jangouts.opensourceecology.org/fullchain.pem;
janus.transport.http.cfg:cert_key = /etc/letsencrypt/live/jangouts.opensourceecology.org/privkey.pem;
janus.transport.http.cfg:;cert_pwd = secretpassphrase
[root@ip-172-31-28-115 janus]# 
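  1. note: changing those cert paths won't take effect until the janus gateway process is restarted; a sketch of that, assuming it's managed as a systemd unit named 'janus' (an assumption -- it may instead need to be relaunched from /opt/janus/bin/janus)
# restart the gateway so it re-reads janus.cfg & janus.transport.http.cfg
systemctl restart janus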
  1. ...
  1. I'm still downloading the OSE Linux iso image (going on a few days of connecting, disconnecting, & re-continuing the `wget -c`). This sort of thing can lead to corruption, but we don't have any checksums published. Thus, I updated our OSE Linux wiki article, adding a TODO item for checksum files and cryptographic signature files of those checksum files, per the standard used by debian & ubuntu (see the sketch after this list) https://wiki.opensourceecology.org/wiki/OSE_Linux#Hashes_.26_Signatures
  2. I sent Christian an email about this request
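  3. a sketch of what I have in mind for those checksum & signature files (the iso filename here is hypothetical; assumes whoever builds the release has a gpg signing key)
# generate the checksum file & a detached, ascii-armored gpg signature of it
sha256sum ose-linux-*.iso > SHA256SUMS
gpg --armor --detach-sign --output SHA256SUMS.sign SHA256SUMS
# a downloader would then verify with:
#   gpg --verify SHA256SUMS.sign SHA256SUMS && sha256sum -c SHA256SUMS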
  1. ...
  1. I checked the `tail -f` I left open on /var/log/maillog since adding the IPv6 PTR = RDNS entry in an attempt to resolve the bounced emails when sending to gmail users from our wiki. It appears that this issue has not recurred since I started monitoring it 5 days ago
  2. there was one line with 'status=bounced' that repeated many dozens of times over those 5 days, and it looks like this; not sure if this is an issue or what caused it
Jul 13 06:20:10 hetzner2 postfix/local[6631]: D087E681EA6: to=<root@hetzner2.opensourceecology.org>, relay=local, delay=0.01, delays=0.01/0/0/0.01, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" )
  1. ugh, actually, that last entry was 4 days ago. So the tail stopped producing output after 1 day.
  2. I did a better test: grepping (or zgrepping, for the compressed files) for bounces, filtering out the unrelated procmail issue pointed out above. From this test, it's very clear that google was bouncing many of our emails per day, but then (after my change) the issue ceased.
[root@hetzner2 log]# grep -i 'status=bounced' maillog-20180717 | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180716.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180715.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180714.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180713.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180712.gz | grep -vi 'create user output file'
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180711.gz | grep -vi 'create user output file'
Jul 10 19:26:05 hetzner2 postfix/smtp[19740]: 2E818681EA6: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c06::1a]:25, delay=0.55, delays=0.02/0/0.09/0.43, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c06::1a] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . y47-v6si14908216wrd.389 - gsmtp (in reply to end of DATA command))
Jul 10 19:26:05 hetzner2 postfix/smtp[19741]: 31112681DE7: to=<marcin@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c0b::1b]:25, delay=0.6, delays=0.02/0/0.08/0.49, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c0b::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . a9-v6si71749wme.40 - gsmtp (in reply to end of DATA command))
Jul 10 19:26:06 hetzner2 postfix/smtp[19740]: B3CB1681EA4: to=<noreply@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c06::1a]:25, delay=0.55, delays=0.01/0/0.05/0.48, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c06::1a] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . t9-v6si9165105wrq.111 - gsmtp (in reply to end of DATA command))
Jul 10 19:26:45 hetzner2 postfix/smtp[19740]: 324DB681EA6: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c0b::1b]:25, delay=0.26, delays=0.02/0/0.05/0.19, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c0b::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . a4-v6si49801wmc.231 - gsmtp (in reply to end of DATA command))
Jul 10 19:26:45 hetzner2 postfix/smtp[19741]: 718C1681DE9: to=<noreply@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c08::1b]:25, delay=0.2, delays=0.01/0/0.05/0.14, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c08::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . 89-v6si9030402wrl.156 - gsmtp (in reply to end of DATA command))
Jul 10 20:09:22 hetzner2 postfix/smtp[25087]: 1C819681EA4: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c07::1a]:25, delay=0.37, delays=0.02/0/0.09/0.26, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c07::1a] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . o83-v6si104445wmo.206 - gsmtp (in reply to end of DATA command))
Jul 10 20:21:03 hetzner2 postfix/smtp[27148]: 9622F681EA6: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c06::1b]:25, delay=0.49, delays=0.02/0/0.06/0.41, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c06::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . g2-v6si121400wmc.195 - gsmtp (in reply to end of DATA command))
Jul 10 20:21:03 hetzner2 postfix/smtp[27149]: 17F89681DE9: to=<noreply@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c00::1b]:25, delay=0.17, delays=0.01/0/0.05/0.11, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c00::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . o16-v6si139472wme.45 - gsmtp (in reply to end of DATA command))
[root@hetzner2 log]# 
  1. going back further through the rotated logs, we see that this happened nearly every day, between 0 and 51 times per day
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180711.gz | grep -vi 'create user output file' | wc -l
8
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180710.gz | grep -vi 'create user output file' | wc -l
19
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180709.gz | grep -vi 'create user output file' | wc -l
9
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180708.gz | grep -vi 'create user output file' | wc -l
20
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180707.gz | grep -vi 'create user output file' | wc -l
51
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180706.gz | grep -vi 'create user output file' | wc -l
27
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180705.gz | grep -vi 'create user output file' | wc -l
10
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180704.gz | grep -vi 'create user output file' | wc -l
13
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180703.gz | grep -vi 'create user output file' | wc -l
15
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180702.gz | grep -vi 'create user output file' | wc -l
0
[root@hetzner2 log]# zgrep -i 'status=bounced' maillog-20180701.gz | grep -vi 'create user output file' | wc -l
0
[root@hetzner2 log]# 
  1. a better test across all the rotated files shows 0-101 of these errors happening per day over the past 31 days; 11/31 days had no errors.
[root@hetzner2 log]# date
Tue Jul 17 21:10:42 UTC 2018
[root@hetzner2 log]# pwd
/var/log
[root@hetzner2 log]# for file in $(ls -1 maillog*.gz); do echo $file; zcat $file | grep -i 'status=bounced' | grep -vi 'create user output file' | wc -l; done
maillog-20180616.gz
87
maillog-20180617.gz
57
maillog-20180618.gz
14
maillog-20180619.gz
37
maillog-20180620.gz
0
maillog-20180621.gz
0
maillog-20180622.gz
4
maillog-20180623.gz
48
maillog-20180624.gz
68
maillog-20180625.gz
5
maillog-20180626.gz
0
maillog-20180627.gz
20
maillog-20180628.gz
101
maillog-20180629.gz
9
maillog-20180630.gz
0
maillog-20180701.gz
0
maillog-20180702.gz
0
maillog-20180703.gz
15
maillog-20180704.gz
13
maillog-20180705.gz
10
maillog-20180706.gz
27
maillog-20180707.gz
51
maillog-20180708.gz
20
maillog-20180709.gz
9
maillog-20180710.gz
19
maillog-20180711.gz
8
maillog-20180712.gz
0
maillog-20180713.gz
0
maillog-20180714.gz
0
maillog-20180715.gz
0
maillog-20180716.gz
0
[root@hetzner2 log]# 
  1. this test should probably be re-done in 31 days. If it's all 0s, then I think we can say the problem has been resolved.
  1. ...
  1. I reset the backblaze@opensourceecology.org account's password, stored it to keepass, created an account on backblaze.com using this email address, and stored _that_ to keepass.
  2. I logged into backblaze, and the first fucking thing they say is that we need to associate the account with a phone #. The reason I use a shared email address is in case they want to use 2FA via email. 2FA via phone doesn't work between Marcin & myself (or other sysadmins). Not to mention that 2FA over SMS is fucking stupidly insecure. And I don't have a fucking phone to begin with! https://help.backblaze.com/hc/en-us/articles/218513467-Mobile-Phone-Verification-for-B2-
  3. I went to send Backblaze an email asking if I can use their service with TOTP instead of the fucking security theater of 2FA over sms, but I couldn't send it without first solving a captcha. Of course, I block all google domains in my browser, so I had to switch to an ephemeral vm just to submit their contact form. God damn it, this is not a great first experience.
Hi,

Our org is looking to switch from Glacier to B2 for our daily 15G of server backups, but I do not have a phone number. How can I use Backblaze B2?

We use 2FA for most of our accounts where possible, but we use TOTP (time-based one time passwords), a very well accepted standard which is supported by many open source apps (ie: andOTP, FreeOTP, Google Authenticator, etc).

In fact, 2FA over SMS (or email) is a horrible idea as it sends the token through an insecure medium. Indeed, it is trivial to intercept a message over SMS or email.

I do not own a phone number. Moreover, we're a geographically dispersed team; if I registered this account with one of my coworker's phone numbers, how would I be able to login when their phone in Germany or rural Missouri or Austin has the token I need to login?

Please let me know if it's possible for me to use this service for our backups without a phone number.


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
  1. I read some scary things about limits to file extensions and low max file sizes on backblaze reviews. This applies to the desktop service only. The cloud (B2) service file size limit is 10T, which is well within our requirements https://www.backblaze.com/b2/docs/large_files.html
  2. like amazon, large files do need to be uploaded in parts. I avoided this api hell on glacier by using glacier.py. Happily, the backblaze python cli tool has a simple 'upload-file' command which abstracts the multi-part upload api calls for you (see the cli sanity-test sketch further below, after I install the tool) https://www.backblaze.com/b2/docs/quick_command_line.html
  3. backblaze got back to me stating that the phone number is required, but that we can switch to TOTP after registering a phone number. That works I guess, though their incompetence is still showing..
  4. I added my google hangouts number to the account, then changed 2FA to disable 2FA via sms & enable 2FA via TOTP. I added the totp secret key to keepass with 10 pre-gen'd tokens.
  5. fucking hell. I went to test it by logging into my account from another vm. It let me login without entering any tokens! I confirmed that 2fa settings say to use it "Every time I sign in". Fail.
    1. not only does that show Backblaze's incompetence by totally bypassing our 2FA settings, but it also means that I can't validate that my 2FA token on my phone (and backed-up to keepass) is valid. So as soon as I log off on both VMs, and try to login later when it _does_ require me to enter a token, I could have locked myself out.
    2. the only reason I'm still considering using BackBlaze is because all of our data shipped to them will be encrypted. Therefore, they can be security dumdums (but advertise to be gurus; fucking marketing) and it doesn't affect us (this is very important data, though). As long as they stay durable (they claim 99.999999999%) https://www.backblaze.com/security.html
    3. actually, if I'm running their shitty code as root, that's an issue. It may be a good idea to have the upload be performed by a lower-privileged backblaze user after encryption by root.
    4. I thought maybe I could get past this with duplicity, but it turns out that duplicity just uses the b2 api. https://bazaar.launchpad.net/~duplicity-team/duplicity/0.8-series/view/head:/duplicity/backends/b2backend.py
    5. duplicity has included b2 support since 0.7.12, and I confirmed that this is already in the yum repo
[maltfield@hetzner2 ~]$ sudo yum install duplicity
...
==============================================================================================================================
 Package                                 Arch                 Version                              Repository            Size
==============================================================================================================================
Installing:
 duplicity                               x86_64               0.7.17-1.el7                         epel                 553 k
Installing for dependencies:
 PyYAML                                  x86_64               3.10-11.el7                          base                 153 k
 librsync                                x86_64               1.0.0-1.el7                          epel                  53 k
 libyaml                                 x86_64               0.1.4-11.el7_0                       base                  55 k
 ncftp                                   x86_64               2:3.2.5-7.el7                        epel                 340 k
 pexpect                                 noarch               2.3-11.el7                           base                 142 k
 python-GnuPGInterface                   noarch               0.3.2-11.el7                         epel                  26 k
 python-fasteners                        noarch               0.9.0-2.el7                          epel                  35 k
 python-httplib2                         noarch               0.9.2-1.el7                          extras               115 k
 python-lockfile                         noarch               1:0.9.1-4.el7.centos                 extras                28 k
 python-paramiko                         noarch               2.1.1-4.el7                          extras               268 k
 python2-PyDrive                         noarch               1.3.1-3.el7                          epel                  49 k
 python2-gflags                          noarch               2.0-5.el7                            epel                  61 k
 python2-google-api-client               noarch               1.6.3-1.el7                          epel                  87 k
 python2-keyring                         noarch               5.0-3.el7                            epel                 115 k
 python2-oauth2client                    noarch               4.0.0-2.el7                          epel                 144 k
 python2-pyasn1-modules                  noarch               0.1.9-7.el7                          base                  59 k
 python2-uritemplate                     noarch               3.0.0-1.el7                          epel                  18 k

Transaction Summary
==============================================================================================================================
Install  1 Package (+17 Dependent packages)

Total download size: 2.2 M
Installed size: 9.4 M
...
[maltfield@hetzner2 ~]$ 
  1. did some more research about duplicity + b2
    1. https://help.backblaze.com/hc/en-us/articles/115001518354-How-to-configure-Backblaze-B2-with-Duplicity-on-Linux
    2. https://www.loganmarchione.com/2017/07/backblaze-b2-backup-setup/
    3. the benefit here is that (as the second link above shows) we can have duplicity make small incremental backups all month, then just do a full backup every 30 days. It also does the encryption for you using gpg (a rough sketch of this scheme follows the duplicity install output below).
    4. the disadvantage is dependency on duplicity. If someone doesn't know duplicity, they'll probably just login to the backblaze wui (where you can download files directly using your web browser), but this data (incrementals anyway) will be useless unless they use the duplicity tool
      1. I would be less concerned about this if we could make the backups from the 1st of every month be a complete backup.
    5. if we use duplicity with incrementals, then it necessarily uses a lot of temp space on the server; this is even listed as a common complaint in their 2-question FAQ http://duplicity.nongnu.org/FAQ.html
    6. another disadvantage is that duplicity (which does both the encryption & upload) would necessarily execute the b2 api code as root. But if I hacked it into my script, I could encrypt as root then call the backblaze cli tool as an underprivileged user
    7. spent some time reading up on the duplicity manual http://duplicity.nongnu.org/duplicity.1.html
  2. I do think it's better to reuse a popular tool like duplicity rather than reinventing the wheel here, so I'll give it a try
[root@hetzner2 ~]# yum install duplicity
...
Complete!
[root@hetzner2 ~]# 
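  1. to make that incremental-plus-monthly-full scheme concrete, here's a rough sketch of the kind of invocation I have in mind (untested; /etc is just an example source dir, passphrase handling is omitted, & the bucket is the one I create in the next step)
# full backup every 30 days, incrementals in between
duplicity --full-if-older-than 30D /etc b2://<obfuscated>:<obfuscated>@ose-server-backups/
# prune old chains, e.g. keep only the 2 most recent full backups (plus their incrementals)
duplicity remove-all-but-n-full 2 --force b2://<obfuscated>:<obfuscated>@ose-server-backups/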
  1. I created a bucket named 'ose-server-backups'
  2. expecting failure, I tried to upload to b2
[root@hetzner2 sync]# duplicity hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg b2://<obfuscated>:<obfuscated>@ose-server-backups/
BackendException: B2 backend requires B2 Python APIs (pip install b2)
[root@hetzner2 sync]# pip install b2
-bash: pip: command not found
[root@hetzner2 sync]# 
  1. I installed pip (python2-pip)
[root@hetzner2 sync]# yum install python-pip
Loaded plugins: fastestmirror, replace
Loading mirror speeds from cached hostfile
 * base: mirror.wiuwiu.de
 * epel: mirror.wiuwiu.de
 * extras: mirror.wiuwiu.de
 * updates: mirror.wiuwiu.de
 * webtatic: uk.repo.webtatic.com
Resolving Dependencies
--> Running transaction check
---> Package python2-pip.noarch 0:8.1.2-6.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================
 Package                         Arch                       Version                            Repository                Size
==============================================================================================================================
Installing:
 python2-pip                     noarch                     8.1.2-6.el7                        epel                     1.7 M

Transaction Summary
==============================================================================================================================
Install  1 Package

Total download size: 1.7 M
Installed size: 7.2 M
Is this ok [y/d/N]: y
Downloading packages:
python2-pip-8.1.2-6.el7.noarch.rpm                                                                     | 1.7 MB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python2-pip-8.1.2-6.el7.noarch                                                                             1/1 
  Verifying  : python2-pip-8.1.2-6.el7.noarch                                                                             1/1 

Installed:
  python2-pip.noarch 0:8.1.2-6.el7                                                                                            

Complete!
[root@hetzner2 sync]# 
  1. unsurprisingly, shitty pip failed
[root@hetzner2 sync]# pip install b2
Collecting b2
  Downloading https://files.pythonhosted.org/packages/c2/1c/c52039c3c3dcc3d1a9725cef7523c4b50abbb967068a1ea40f28cd9978f5/b2-1.2.0.tar.gz (99kB)
	100% || 102kB 4.5MB/s 
	Complete output from command python setup.py egg_info:
	setuptools 20.2 or later is required. To fix, try running: pip install "setuptools>=20.2"
	
	----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-7XvsdJ/b2/
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[root@hetzner2 sync]# 
  1. the upgrade worked
[root@hetzner2 sync]# pip install --upgrade pip
Collecting pip
  Downloading https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl (1.3MB)
	100% || 1.3MB 1.2MB/s 
Installing collected packages: pip
  Found existing installation: pip 8.1.2
	Uninstalling pip-8.1.2:
	  Successfully uninstalled pip-8.1.2
Successfully installed pip-10.0.1
[root@hetzner2 sync]# 
  1. but it still failed again
[root@hetzner2 sync]# pip install b2
Collecting b2
  Using cached https://files.pythonhosted.org/packages/c2/1c/c52039c3c3dcc3d1a9725cef7523c4b50abbb967068a1ea40f28cd9978f5/b2-1.2.0.tar.gz
	Complete output from command python setup.py egg_info:
	setuptools 20.2 or later is required. To fix, try running: pip install "setuptools>=20.2"
	
	----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-mCAFuT/b2/
[root@hetzner2 sync]# 
  1. preferring the system's package manager over pip, I tried to install setuptools, but it was already installed
[root@hetzner2 sync]# yum install python-setuptools
Loaded plugins: fastestmirror, replace
Loading mirror speeds from cached hostfile
 * base: mirror.wiuwiu.de
 * epel: mirror.wiuwiu.de
 * extras: mirror.wiuwiu.de
 * updates: mirror.wiuwiu.de
 * webtatic: uk.repo.webtatic.com
Package python-setuptools-0.9.8-7.el7.noarch already installed and latest version
Nothing to do
[root@hetzner2 sync]# 
  1. biting my lip in anxious fear, I proceeded with the install of setuptools from pip
[root@hetzner2 sync]# pip install "setuptools>=20.2"
Collecting setuptools>=20.2
  Downloading https://files.pythonhosted.org/packages/ff/f4/385715ccc461885f3cedf57a41ae3c12b5fec3f35cce4c8706b1a112a133/setuptools-40.0.0-py2.py3-none-any.whl (567kB)
	100% || 573kB 11.7MB/s 
Installing collected packages: setuptools
  Found existing installation: setuptools 0.9.8
	Uninstalling setuptools-0.9.8:
	  Successfully uninstalled setuptools-0.9.8
Successfully installed setuptools-40.0.0
[root@hetzner2 sync]# 
  1. this made it further, but still failed
[root@hetzner2 sync]# pip install b2
Collecting b2
  Using cached https://files.pythonhosted.org/packages/c2/1c/c52039c3c3dcc3d1a9725cef7523c4b50abbb967068a1ea40f28cd9978f5/b2-1.2.0.tar.gz
Collecting arrow<0.12.1,>=0.8.0 (from b2)
  Downloading https://files.pythonhosted.org/packages/90/48/7ecfce4f2830f59dfacbb2b5a31e3ff1112b731a413724be40f57faa4450/arrow-0.12.0.tar.gz (89kB) 
	100% || 92kB 2.3MB/s
Collecting logfury>=0.1.2 (from b2)
  Downloading https://files.pythonhosted.org/packages/55/71/c70df1ef41721b554c91982ebde423a5cf594261aa5132e39ade9196fa3b/logfury-0.1.2-py2.py3-none-any.whl
Collecting requests>=2.9.1 (from b2)
  Downloading https://files.pythonhosted.org/packages/65/47/7e02164a2a3db50ed6d8a6ab1d6d60b69c4c3fdf57a284257925dfc12bda/requests-2.19.1-py2.py3-none-any.whl (91kB)
	100% || 92kB 3.4MB/s
Collecting six>=1.10 (from b2)
  Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Collecting tqdm>=4.5.0 (from b2)
  Downloading https://files.pythonhosted.org/packages/93/24/6ab1df969db228aed36a648a8959d1027099ce45fad67532b9673d533318/tqdm-4.23.4-py2.py3-none-any.whl (42kB)
	100% || 51kB 6.9MB/s
Collecting futures>=3.0.5 (from b2)
  Downloading https://files.pythonhosted.org/packages/2d/99/b2c4e9d5a30f6471e410a146232b4118e697fa3ffc06d6a65efde84debd0/futures-3.2.0-py2-none-any.whl
Collecting python-dateutil (from arrow<0.12.1,>=0.8.0->b2)
  Downloading https://files.pythonhosted.org/packages/cf/f5/af2b09c957ace60dcfac112b669c45c8c97e32f94aa8b56da4c6d1682825/python_dateutil-2.7.3-py2.py3-none-any.whl (211kB)
	100% || 215kB 4.4MB/s
Collecting backports.functools_lru_cache==1.2.1 (from arrow<0.12.1,>=0.8.0->b2)
  Downloading https://files.pythonhosted.org/packages/d1/0e/c473e3c37c34fea699d85d5b9e3caf712813c4cd2dcc0a5a64ec2a6867f7/backports.functools_lru_cache-1.2.1-py2.py3-none-any.whl
Collecting funcsigs (from logfury>=0.1.2->b2)
  Downloading https://files.pythonhosted.org/packages/69/cb/f5be453359271714c01b9bd06126eaf2e368f1fddfff30818754b5ac2328/funcsigs-1.0.2-py2.py3-none-any.whl
Collecting certifi>=2017.4.17 (from requests>=2.9.1->b2)
  Downloading https://files.pythonhosted.org/packages/7c/e6/92ad559b7192d846975fc916b65f667c7b8c3a32bea7372340bfe9a15fa5/certifi-2018.4.16-py2.py3-none-any.whl (150kB)
	100% || 153kB 5.4MB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.9.1->b2)
  Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
	100% || 143kB 5.5MB/s
Collecting urllib3<1.24,>=1.21.1 (from requests>=2.9.1->b2)
  Downloading https://files.pythonhosted.org/packages/bd/c9/6fdd990019071a4a32a5e7cb78a1d92c53851ef4f56f62a3486e6a7d8ffb/urllib3-1.23-py2.py3-none-any.whl (133kB)
	100% || 143kB 5.9MB/s
Collecting idna<2.8,>=2.5 (from requests>=2.9.1->b2)
  Downloading https://files.pythonhosted.org/packages/4b/2a/0276479a4b3caeb8a8c1af2f8e4355746a97fab05a372e4a2c6a6b876165/idna-2.7-py2.py3-none-any.whl (58kB)
	100% || 61kB 7.4MB/s
Installing collected packages: six, python-dateutil, backports.functools-lru-cache, arrow, funcsigs, logfury, certifi, chardet, urllib3, idna, requests, tqdm, futures, b2
  Found existing installation: six 1.9.0
	Uninstalling six-1.9.0:
	  Successfully uninstalled six-1.9.0
  Running setup.py install for arrow ... done
  Found existing installation: chardet 2.2.1
	Uninstalling chardet-2.2.1:
	  Successfully uninstalled chardet-2.2.1
  Found existing installation: urllib3 1.10.2
	Uninstalling urllib3-1.10.2:
	  Successfully uninstalled urllib3-1.10.2
  Found existing installation: idna 2.4
	Uninstalling idna-2.4:
	  Successfully uninstalled idna-2.4
  Found existing installation: requests 2.6.0
Cannot uninstall 'requests'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
[root@hetzner2 sync]#                                                                                                          
  1. found the backend before it was integrated into duplicity https://github.com/matthewbentley/duplicity_b2
  2. I decided to just install from git (meh) https://www.backblaze.com/b2/docs/quick_command_line.html
[root@hetzner2 sandbox]# git clone https://github.com/Backblaze/B2_Command_Line_Tool.git
Cloning into 'B2_Command_Line_Tool'...
remote: Counting objects: 5605, done.
remote: Compressing objects: 100% (134/134), done.
remote: Total 5605 (delta 191), reused 188 (delta 129), pack-reused 5341
Receiving objects: 100% (5605/5605), 1.45 MiB | 2.36 MiB/s, done.
Resolving deltas: 100% (4005/4005), done.
[root@hetzner2 sandbox]# cd B2_Command_Line_Tool/
[root@hetzner2 B2_Command_Line_Tool]# python setup.py install
...
Finished processing dependencies for b2==1.2.1
[root@hetzner2 B2_Command_Line_Tool]# 
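  1. before wiring b2 into anything, here's a sketch of the sort of cli sanity test I'd expect to run (per the quick_command_line docs linked above; not yet executed, & the 'b2user' account for the privilege-separation idea mentioned earlier is hypothetical)
# authorize with the account credentials from the backblaze wui, then round-trip a tiny test file
b2 authorize-account <obfuscated> <obfuscated>
b2 upload-file ose-server-backups /tmp/b2-test.txt b2-test.txt
b2 list-file-names ose-server-backups
# later, the upload step could run as an underprivileged user, e.g.:
#   sudo -u b2user b2 upload-file ose-server-backups /path/to/archive.tar.gpg archive.tar.gpg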
  1. the install from git appears to have worked, but now duplicity is asking me for a passphrase for gpg symmetric encryption. I want to use a key file as input, but all of the gpg-related options appear to be for asymmetric encryption via a public key specified by argument. That's not what I want!
[root@hetzner2 sandbox]# duplicity /var/tmp/deprecateHetzner1/sync/hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg b2://<obfuscate>:<obfuscated>@ose-server-backups/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase: 

Mon Jul 16, 2018

  1. hetzner responded to my query about the absolute paths for our addon domains. They concluded that none of the directories actually exist anymore. That's not good for data retention, but at least we don't have to worry about backing them up.
Dear Mr Altfield

The absolute path for your accounts public_html is: /usr/www/users/osemain

This is displayed as "/" in konsoleH's Document Root.

All Addon-Domains are listed below this path, assuming "/addon" as Document root, this would be "/usr/www/users/osemain/addon"

I regret to say, none of your document roots are existing anymore, you may deleted them. So please re-upload your data and re-set your document roots.


If we can be of any further assistance, please let us know.


Mit freundlichen Grüßen / Kind regards

Emanuel Wiesner
  1. we still do probably want to back up the home directories of these addon domains. Logging into konsoleH & clicking on each addon domain, then Services -> Access Details -> Login, gave the username & password for each login
    1. the 'addontest.opensourceecology.org' addon domain's user was 'addon'. It had a password, but the default shell was /bin/false. Therefore, ssh-ing in did not work on this account
user@ose:~$ ssh addon@dedi978.your-server.de
addon@dedi978.your-server.de's password: 
bin/false: No such file or directory
Connection to dedi978.your-server.de closed.
user@ose:~$
    1. the 'holla.opensourceecology.org' addon domain's user was 'oseholla'. This account's shell was also /bin/false.
    2. the 'irc.opensourceecology.org' addon domain's user was 'oseirc'. This account's shell was also /bin/false.
    3. the 'opensourcwarehouse.org' addon domain's user was 'openswh'. This account's shell was also /bin/false.
    4. the 'sandbox.opensourceecology.org' addon domain's user was 'sandbox'. This account's shell was also /bin/false.
    5. the 'survey.opensourceecology.org' addon domain's user was 'osesurv'. This account's shell is /bin/bash. It's the only one that I can actually log into, and it's the one that I already backed-up
  1. I scp'd the home dir backup of osesurv that I created July 11th to hetzner2:/var/tmp/deprecateHetzner1/osesurv/final_backup_before_hetzner1_deprecation_osesurv_20180711-165315_home.tar.bz2
  2. now I have all the backups of all home dirs, web roots, and databases stashed on hetzner2 ready for encryption, metadata/cataloging & upload to glacier.
[root@hetzner2 deprecateHetzner1]# date
Mon Jul 16 18:51:55 UTC 2018
[root@hetzner2 deprecateHetzner1]# pwd
/var/tmp/deprecateHetzner1
[root@hetzner2 deprecateHetzner1]# ls -lahR
.:
total 32K
drwx------   8 root      root      4.0K Jul 16 18:29 .
drwxrwxrwt. 52 root      root      4.0K Jul 15 02:08 ..
drwxrwxr-x   2 maltfield maltfield 4.0K Jul 11 18:13 microft
drwxrwxr-x   2 maltfield maltfield 4.0K Jul 11 18:04 oseblog
drwxrwxr-x   2 maltfield maltfield 4.0K Jul  6 23:38 osecivi
drwxrwxr-x   2 maltfield maltfield 4.0K Jul  6 23:47 oseforum
drwxrwxr-x   2 maltfield maltfield 4.0K Jul 11 17:48 osemain
drwxr-xr-x   2 root      root      4.0K Jul 16 18:38 osesurv

./microft:
total 6.2G
drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 18:13 .
drwx------ 8 root      root      4.0K Jul 16 18:29 ..
-rw-r--r-- 1 maltfield maltfield 1.4G Jul 11 18:12 final_backup_before_hetzner1_deprecation_microft_20180706-234228_home.tar.bz2
-rw-r--r-- 1 maltfield maltfield 523K Jul 11 18:12 final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_db2.20180706-234228.sql.bz2
-rw-r--r-- 1 maltfield maltfield 1.3M Jul 11 18:12 final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_drupal1.20180706-234228.sql.bz2
-rw-r--r-- 1 maltfield maltfield 3.3G Jul 11 18:13 final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_wiki.20180706-234228.sql.bz2
-rw-r--r-- 1 maltfield maltfield 1.7G Jul 11 18:14 final_backup_before_hetzner1_deprecation_microft_20180706-234228_webroot.tar.bz2

./oseblog:
total 4.4G
drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 18:04 .
drwx------ 8 root      root      4.0K Jul 16 18:29 ..
-rw-r--r-- 1 maltfield maltfield 1.2G Jul 11 18:01 final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_home.tar.bz2
-rw-r--r-- 1 maltfield maltfield 135M Jul 11 18:01 final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_mysqldump-oseblog.20180706-234052.sql.bz2
-rw-r--r-- 1 maltfield maltfield 3.1G Jul 11 18:02 final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_webroot.tar.bz2

./osecivi:
total 15M
drwxrwxr-x 2 maltfield maltfield 4.0K Jul  6 23:38 .
drwx------ 8 root      root      4.0K Jul 16 18:29 ..
-rw-r--r-- 1 maltfield maltfield 2.3M Jul  6 23:38 final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2
-rw-r--r-- 1 maltfield maltfield 1.1M Jul  6 23:38 final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2
-rw-r--r-- 1 maltfield maltfield 173K Jul  6 23:38 final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2
-rw-r--r-- 1 maltfield maltfield  12M Jul  6 23:38 final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2

./oseforum:
total 955M
drwxrwxr-x 2 maltfield maltfield 4.0K Jul  6 23:47 .
drwx------ 8 root      root      4.0K Jul 16 18:29 ..
-rw-r--r-- 1 maltfield maltfield 853M Jul  6 23:47 final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
-rw-r--r-- 1 maltfield maltfield  46M Jul  6 23:47 final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
-rw-r--r-- 1 maltfield maltfield  57M Jul  6 23:47 final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2

./osemain:
total 3.3G
drwxrwxr-x 2 maltfield maltfield 4.0K Jul 11 17:48 .
drwx------ 8 root      root      4.0K Jul 16 18:29 ..
-rw-r--r-- 1 maltfield maltfield 2.9G Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_home.tar.bz2
-rw-r--r-- 1 maltfield maltfield 1.2M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-openswh.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield 187K Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_fef.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield 157K Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osesurv.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield   14 Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_website.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield 203M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osewiki.20180706-224656.sql.bz2
-rw-r--r-- 1 maltfield maltfield 212M Jul 11 17:48 final_backup_before_hetzner1_deprecation_osemain_20180706-224656_webroot.tar.bz2

./osesurv:
total 260K
drwxr-xr-x 2 root      root      4.0K Jul 16 18:38 .
drwx------ 8 root      root      4.0K Jul 16 18:29 ..
-rw-r--r-- 1 maltfield maltfield 252K Jul 16 18:37 final_backup_before_hetzner1_deprecation_osesurv_20180711-165315_home.tar.bz2
[root@hetzner2 deprecateHetzner1]# du -sh
15G	.
[root@hetzner2 deprecateHetzner1]# 
  1. I clicked around on the hetzner konsoleH wui to see if I could configure a firewall or shut down the server. I found nothing. So I created a new support request with hetzner asking how I could configure a firewall that would drop all incoming & outgoing tcp & udp traffic except [a] ESTABLISHED/RELATED connections and [b] tcp traffic destined for port 222 (ssh).
  2. I re-purposed (with heavy modifications) the 'uploadToGlacier.sh' script that was used on dreamhost's hancock. It's now at hetzner2:/root/bin/uploadToGlacier.sh. First I'll run it with backupDirs="/var/tmp/deprecateHetzner1/osesurv". If that goes fine, then I'll run it with all the other dirs listed. For each of the "backupDirs" I list, this script should encrypt the archive, create a metadata fileList (also encrypted), and upload both to glacier.
  3. the first upload to glacier appears to have been successful. I'll wait until I can sync the archive list in the glacier vault as a test to ensure that it absolutely worked before I use the script to upload the remaining archives later this week
[root@hetzner2 bin]# date
Tue Jul 17 00:35:46 UTC 2018
[root@hetzner2 bin]# ./uploadToGlacier.sh
+ backupDirs=/var/tmp/deprecateHetzner1/osesurv
+ syncDir=/var/tmp/deprecateHetzner1/sync/
+ encryptionKeyFilePath=/root/backups/ose-backups-cron.key
+ export AWS_ACCESS_KEY_ID=<obfuscated>
+ AWS_ACCESS_KEY_ID=<obfuscated>
+ export AWS_SECRET_ACCESS_KEY=<obfuscated>
+ AWS_SECRET_ACCESS_KEY=<obfuscated>
++ echo /var/tmp/deprecateHetzner1/osesurv
+ for dir in '$(echo $backupDirs)'
++ basename /var/tmp/deprecateHetzner1/osesurv
+ archiveName=hetzner1_final_backup_before_hetzner1_deprecation_osesurv
++ date -u --rfc-3339=seconds
+ timestamp='2018-07-17 00:35:48+00:00'
+ fileListFilePath=/var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt
+ archiveFilePath=/var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar
+ echo ================================================================================
+ echo 'This file is metadata for the archive '\hetzner1_final_backup_before_hetzner1_deprecation_osesurv'\. In it, we list all the files included in the compressed/encrypted archive (produced using '\ls -lahR /var/tmp/deprecateHetzner1/osesurv'\), including the files within the tarballs within the archive (produced using '\find /var/tmp/deprecateHetzner1/osesurv -type f -exec tar -tvf '\{}'\ \; '\)'
+ echo ''

+ echo 'This archive was made as a backup of the files and databases that were previously used on hetnzer1 prior to migrating to hetzner2 in 2018. Before we cancelled our contract for hetzner1, this backup was made & put on glacier for long-term storage in-case we learned later that we missed some content on the migration. Form more information, please see the following link:
		echo ' aws glacier.py letsencrypt uploadToGlacier.sh uploadToGlacier.sh.20180716.orig 'https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation
		echo
		echo  >> /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt
		echo ' - Michael Altfield Note: this file was generated at 2018-07-17 '00:35:48+00:00 >> /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt
		echo ================================================================================ >> /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt
		echo ############################# >> /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt
		echo #' 'ls -lahR' output follows
./uploadToGlacier.sh: line 43: maltfield@opensourceecology.org: No such file or directory
+ echo '#############################'
+ ls -lahR /var/tmp/deprecateHetzner1/osesurv
+ echo ================================================================================
+ echo '############################'
+ echo '# tarball contents follows #'
+ echo '############################'
+ find /var/tmp/deprecateHetzner1/osesurv -type f -exec tar -tvf '{}' ';'
+ echo ================================================================================
+ bzip2 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt
+ gpg --symmetric --cipher-algo aes --batch --passphrase-file /root/backups/ose-backups-cron.key /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2
+ rm /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt
rm: cannot remove ‘/var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt’: No such file or directory
+ /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
+ '[' 0 -eq 0 ']'
+ rm -f /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.fileList.txt.bz2.gpg
+ tar -cvf /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar /var/tmp/deprecateHetzner1/osesurv/
tar: Removing leading `/' from member names
/var/tmp/deprecateHetzner1/osesurv/
/var/tmp/deprecateHetzner1/osesurv/final_backup_before_hetzner1_deprecation_osesurv_20180711-165315_home.tar.bz2
+ gpg --symmetric --cipher-algo aes --batch --passphrase-file /root/backups/ose-backups-cron.key /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar
+ rm /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar
+ /root/bin/glacier.py --region us-west-2 archive upload deleteMeIn2020 /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
+ '[' 0 -eq 0 ']'
+ rm -f /var/tmp/deprecateHetzner1/sync//hetzner1_final_backup_before_hetzner1_deprecation_osesurv.tar.gpg
[root@hetzner2 bin]# 
  1. I initiated a `--wait`-ing vault sync with glacier.py
[root@hetzner2 sync]# glacier.py --region us-west-2 vault sync --wait deleteMeIn2020
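    1. once the sync completes, the locally-cached inventory should let me confirm that both the fileList metadata and the tarball landed in the vault; iirc glacier-cli also has an `archive list` subcommand for this (untested sketch):
# list the vault's (locally-synced) archive inventory & look for the osesurv uploads
glacier.py --region us-west-2 archive list deleteMeIn2020 | grep osesurv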

  1. ..
  1. I did some digging into reviews comparing cloud storage providers against Backblaze B2, namely AWS Glacier & s3. The issue with glacier & s3 is that they impose a minimum storage time. That totally breaks our delete-daily-uploads-after-3-days model, and makes a _huge_ difference in GB stored per month and, therefore, the annual bill. For amazon, we'd be charged as if we were storing 30 days * 15G = 450 GB minimum. For Backblaze, we'd be able to delete after 3 days, so we only have ~45G stored. Because Backblaze's per-GB price is only slightly higher than glacier's, that could be a ~8-10x difference in cost!
    1. unfortunately, many of the reviews I found on Backblaze talked about them as an "unlimited storage for a single computer" service. I'm not sure this is the product that would apply to our server..
    2. https://www.cloudwards.net/azure-vs-amazon-s3-vs-google-vs-backblaze-b2/#one
      1. this one listed Amazon over B2 because of their security. We don't give a shit about their security because we're just encrypting it before uploading it anyway! TNO! B2 beat AWS regarding cost, with no major cons that I could see on this review.
    3. backblaze does have a CLI tool, but everything they offer appears to target windows & mac. There is no mention of linux support, but the github shows it's a python tool. Maybe it works in linux? https://github.com/Backblaze/B2_Command_Line_Tool
    4. ah, backblaze does reference linux support here https://help.backblaze.com/hc/en-us/articles/217664628-How-does-Backblaze-support-Linux-Users-
      1. they say duplicity supports backblaze. That's great! I wanted to use duplicity before. I'll have to look into this option more.
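      2. if duplicity's b2 backend works the way that article implies, the invocation would presumably look something like this (rough, untested sketch; the account id, application key, and bucket name below are placeholders, not real values):
# duplicity pushing gpg-encrypted incrementals straight to a B2 bucket (sketch)
duplicity --encrypt-key <gpg-key-id> /etc \
  b2://<accountId>:<applicationKey>@ose-backups/hetzner2/etc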
  2. I logged into the G Suite Admin = Gapps = https://admin.google.com
    1. confirmed that I now have more permissions since Marcin made me a Super Admin here
    2. I guess we're grandfathered-into a free plan (up to 200 licenses; currently we're at 33)
    3. I created a new user backblaze@opensourceecology.org. We'll use this to create a new account for backblaze.

Sat Jul 14, 2018

  1. Marcin granted me super admin rights on our Google Suite account (so I can whitelist our IPs for Gmail); I haven't tested this access yet
  2. Marcin mentioned that the STL files I produced for 3d printing parts from the Prusa lacked recessed nut catchers. We compared screenshots and his freecad showed the recesses where the nuts go while mine did not. This is a strange discrepancy which Marcin said should be resolved by everyone running oselinux. I'll have to setup an HVM for OSE Linux. I can't use it for 99% of my daily tasks as it lacks Persistence currently.
  3. I had several back-and-forth emails with Chris about enabling Persistence in OSE Linux. Progress is being made. Code is in github & documented on our wiki here https://wiki.opensourceecology.org/wiki/OSE_Linux_Persistence
  4. Hetzner got back to me about the addon domains' document roots. They simply told me to check konsoleH. Indeed, the "Document root" _is_ listed when you click on each addon domain, but it's a useless string. I emailed them back asking them to either tell us the absolute path to each of our 6x addon domains or to send me the entire contents of the /etc/httpd directory so I could figure it out myself (again, I don't have root on this old server)
Hi Bastian,

Can you please tell me the absolute path of each of our addon domains?

It looks like we have 6x addon domains under our 'osemain' account. As you suggested, I clicked on each of the domains in konsoleH and checked the string listed under their "Document Root" in konsoleH. Here's the results:

	addontest.opensourceecology.org /addon-domains/addontest/
	holla.opensourceecology.org /addon-domains/holla
	irc.opensourceecology.org /addon-domains/irc/
	opensourcewarehouse.org /archive/addon-domains/opensou…
	sandbox.opensourceecology.org /addon-domains/sandbox
	survey.opensourceecology.org /addon-domains/survey/

Unfortunately, none of these paths are absolute paths. Therefore, they are ambiguous.

Assuming they are merely underneath the master account's docroot, I'd assume these document root directories would be relative to '/usr/home/osemain/public_html/'. However, most of these directories do not exist in that directory.

osemain@dedi978:~/public_html$ date
Sat Jul 14 22:54:52 CEST 2018
osemain@dedi978:~/public_html$ pwd
/usr/home/osemain/public_html
osemain@dedi978:~/public_html$ ls -lah
total 32K
drwx---r-x  5 osemain users   4.0K May 25 17:42 .
drwxr-x--x 14 root    root    4.0K Jul 12 14:29 ..
drwxr-xr-x 13 osemain osemain 4.0K Jun 20  2017 archive
-rwxr-xr-x  1 osemain osemain 1.9K Mar  1 20:31 .htaccess
drwxr-xr-x  2 osemain osemain 4.0K Sep 17  2017 logs
drwxr-xr-x 14 osemain osemain 4.0K Mar 31  2015 mediawiki-1.24.2.extra
-rw-r--r--  1 osemain osemain  526 Jun 19  2015 old.html
-rw-r--r--  1 osemain osemain  883 Jun 19  2015 oldu.html.done
osemain@dedi978:~/public_html$

There is no 'addon-domains' directory here. The only directory that matches the "Document root"s extracted from konsoleH as listed above is for 'opensourcewarehouse.org', which is listed as being inside a directory 'archive'. Unfortunately, I can't even see what that directory exactly is. The ellipsis (...) in "/archive/addon-domains/opensou…" is literally in the string that konsoleH gave me.

Can you please provide for me the _absolute_ path of the document roots for 6x the vhosts listed as "addon-domains" listed above? If you could attach the contents of the /etc/httpd directory, that would also be extremely helpful in figuring this information out myself.

Please provide for me the absolute paths of the 6x document roots of the "addon-domains".


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
  1. Emailed Marcin about All Power Labs, a biomass generator company based in Berkeley & added a wiki article about them https://wiki.opensourceecology.org/wiki/AllPowerLabs

Thu Jul 12, 2018

  1. hetzner got back to me about adding the PTR = RDNS entry. They say I can self-service this request via robot "under the tab IP...click on the small plus symbol beside of the IPv6 subnet."
You set the RDNS entry yourself via robot. You can do it directly at the server under the tab IP. Please click on the small plus symbol beside of the IPv6 subnet.

Best regards

  Ralf Sager
  1. I found it: After logging in, click "Servers" (under the "Main Functions" header on the left), then click on our server, then click the "IPs" tab (it was the first = default tab). Indeed, there is a very small plus symbol to the left of our ipv6 subnet = " 2a01:4f8:172:209e:: / 64". Clicking on that plus symbol opens a simple form asking for an "IP Address" + "Reverse DNS entry".
    1. since we have a whole ipv6 subnet, it appears that we can have multiple entries here. I entered "2a01:4f8:172:209e::2" for the ip address (as this was what google reported to us) and "opensourceecology.org" for the "Reverse DNS entry".
    2. interestingly, there were no RDNS entries for the ipv4 addresses above. I set those to 'opensourceecology.org' as well.
    3. it worked immediately!
user@personal:~$ dig +short -x "2a01:4f8:172:209e::2"
opensourceecology.org.
user@personal:~$ 
    1. I emailed Ralf at hetzner back, asking if this self-serviceability of setting the RDNS = PTR address was just as trivial for hetzner cloud nodes as it is for hetzner dedicated baremetal servers
  1. here's the whole PTR = RDNS response using dig on our ipv6 address
user@personal:~$ dig -x "2a01:4f8:172:209e::2"

; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 2a01:4f8:172:209e::2
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7215
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 7

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. IN PTR

;; ANSWER SECTION:
2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. 5937 IN PTR opensourceecology.org.

;; AUTHORITY SECTION:
8.f.4.0.1.0.a.2.ip6.arpa. 5937	IN	NS	ns1.your-server.de.
8.f.4.0.1.0.a.2.ip6.arpa. 5937	IN	NS	ns.second-ns.com.
8.f.4.0.1.0.a.2.ip6.arpa. 5937	IN	NS	ns3.second-ns.de.

;; ADDITIONAL SECTION:
ns.second-ns.com.	5241	IN	A	213.239.204.242
ns.second-ns.com.	111071	IN	AAAA	2a01:4f8:0:a101::b:1
ns1.your-server.de.	84441	IN	A	213.133.106.251
ns1.your-server.de.	24671	IN	AAAA	2a01:4f8:d0a:2006::2
ns3.second-ns.de.	24672	IN	A	193.47.99.4
ns3.second-ns.de.	24671	IN	AAAA	2001:67c:192c::add:b3

;; Query time: 3 msec
;; SERVER: 10.137.2.1#53(10.137.2.1)
;; WHEN: Thu Jul 12 13:51:54 EDT 2018
;; MSG SIZE  rcvd: 358

user@personal:~$ 
  1. ...
  1. hetzner got back to me about the public_html directory being "permission denied" for the 'osesurv' user. They said that the document root is in the main user's public_html dir. I asked for them to tell me the absolute path to this dir, as I cannot check the apache config without root.
Dear Mr Altfield

all website data is always saved in the main account. Addon domains only use files from the main domains public_html folder.

If we can be of any further assistance, please let us know.


Mit freundlichen Grüßen / Kind regards

Jan Barnewold

Wed Jul 11, 2018

  1. hetzner got back to me, stating that I should go to "Services -> Login" in order to access the home directory of the 'osesurv' account (at /usr/home/osesurv)
Dear Mr Altfield

every addon domain has it's own home directory. The login details can be found under Services -> Login.

If you have any further questions, please feel free to contact me.
  1. I navigated to the addon domain in the hetzner wui konsoleh & to Services -> Login. I got a username & password. This let me ssh into the server as the user!
osesurv@dedi978:~$ date
Wed Jul 11 18:32:36 CEST 2018
osesurv@dedi978:~$ pwd
/usr/home/osesurv
osesurv@dedi978:~$ whoami
osesurv
osesurv@dedi978:~$ ls -lah
total 80K
drwx--x---  5 osesurv mail    4.0K Sep 21  2011 .
drwxr-x--x 14 root    root    4.0K Mar  9  2013 ..
-rw-r--r--  1 osesurv osesurv  220 Apr 10  2010 .bash_logout
-rw-r--r--  1 osesurv osesurv 3.2K Apr 10  2010 .bashrc
-rw-r--r--  1 osesurv osesurv   40 Sep 21  2011 .forward
-rw-r-----  1 osesurv mail    2.2K Sep 21  2011 passwd.cdb
-rw-r--r--  1 osesurv osesurv  675 Apr 10  2010 .profile
lrwxrwxrwx  1 root    root      23 Sep 21  2011 public_html -> ../../www/users/osesurv
-rw-r--r--  1 osesurv osesurv   40 Sep 21  2011 .qmail
-rw-r--r--  1 osesurv osesurv   25 Sep 21  2011 .qmail-default
drwxr-x---  2 osesurv osesurv 4.0K Sep 21  2011 .tmp
drwxrwxr-x  3 osesurv mail    4.0K Sep 21  2011 users
drwxr-xr-x  2 osesurv osesurv  36K Mar  6  2014 www_logs
osesurv@dedi978:~$ 
  1. none of these addon domains can have databases (I think), but it appears that I need to get all their home & web files
  2. began backing-up files on addon domain = osesurv
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
  1. so ^ that failed. The home dir was accessible, but I'm getting a permission denied issue with the www dir linked to by public_html.
osesurv@dedi978:~$ # backup web root files
osesurv@dedi978:~$ time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*
tar: Removing leading `/' from member names
tar: /usr/www/users/osesurv/*: Cannot stat: Permission denied
tar: Exiting with failure status due to previous errors

real	0m0.013s
user	0m0.004s
sys	0m0.000s
osesurv@dedi978:~$ 
  1. I emailed hetzner back about this, asking how I can access this user's www dir
Hi Jan,

Thank you. I was able to ssh in, and I was able to access the user's
home directory. But I cannot access the user's www directory.

user@ose:~$ ssh osesurv@dedi978.your-server.de
osesurv@dedi978.your-server.de's password:
Last login: Wed Jul 11 18:30:46 2018 from 108.160.67.63
osesurv@dedi978:~$ date
Wed Jul 11 18:58:36 CEST 2018
osesurv@dedi978:~$ pwd
/usr/home/osesurv
osesurv@dedi978:~$ whoami
osesurv
osesurv@dedi978:~$ ls
noBackup  passwd.cdb  public_html  users  www_logs
osesurv@dedi978:~$ ls -lah public_html
lrwxrwxrwx 1 root root 23 Sep 21  2011 public_html ->
../../www/users/osesurv
osesurv@dedi978:~$ ls -lah public_html/
ls: cannot open directory public_html/: Permission denied
osesurv@dedi978:~$ ls -lah ../../www/users/osesurv
ls: cannot open directory ../../www/users/osesurv: Permission denied
osesurv@dedi978:~$ ls -lah /usr/www/users/osesurv/
ls: cannot open directory /usr/www/users/osesurv/: Permission denied
osesurv@dedi978:~$

Note that I also cannot access this directory from the 'osemain' user
under which the addon domain 'osesurv' exists:

osemain@dedi978:~$ date
Wed Jul 11 19:02:03 CEST 2018
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ ls -lah /usr/www/users/osesurv
ls: cannot access /usr/www/users/osesurv/.: Permission denied
ls: cannot access /usr/www/users/osesurv/..: Permission denied
total 0
d????????? ? ? ? ?            ? .
d????????? ? ? ? ?            ? ..
osemain@dedi978:~$

Can you please tell me how I can access the files in
'/usr/www/users/osesurv/'? Is it possible to do so over ssh?


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
  1. ...
  1. I went to check to see if the PTR dns entry was in-place for a reverse lookup of our ipv6 address that I created yesterday. Unfortunately, there's no change
user@personal:~$ dig +short -x 138.201.84.243
static.243.84.201.138.clients.your-server.de.
user@personal:~$ dig +short -x "2a01:4f8:172:209e::2"
user@personal:~$ 
  1. here's the long results
user@personal:~$ date
Wed Jul 11 13:16:16 EDT 2018
user@personal:~$ dig -x 138.201.84.243

; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 138.201.84.243
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35146
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 7

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;243.84.201.138.in-addr.arpa.	IN	PTR

;; ANSWER SECTION:
243.84.201.138.in-addr.arpa. 86108 IN	PTR	static.243.84.201.138.clients.your-server.de.

;; AUTHORITY SECTION:
84.201.138.in-addr.arpa. 86108	IN	NS	ns3.second-ns.de.
84.201.138.in-addr.arpa. 86108	IN	NS	ns1.your-server.de.
84.201.138.in-addr.arpa. 86108	IN	NS	ns.second-ns.com.

;; ADDITIONAL SECTION:
ns.second-ns.com.	4381	IN	A	213.239.204.242
ns.second-ns.com.	169981	IN	AAAA	2a01:4f8:0:a101::b:1
ns1.your-server.de.	83581	IN	A	213.133.106.251
ns1.your-server.de.	83581	IN	AAAA	2a01:4f8:d0a:2006::2
ns3.second-ns.de.	83581	IN	A	193.47.99.4
ns3.second-ns.de.	83581	IN	AAAA	2001:67c:192c::add:b3

;; Query time: 6 msec
;; SERVER: 10.137.2.1#53(10.137.2.1)
;; WHEN: Wed Jul 11 13:16:22 EDT 2018
;; MSG SIZE  rcvd: 322

user@personal:~$ dig -x "2a01:4f8:172:209e::2"

; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -x 2a01:4f8:172:209e::2
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57144
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.9.0.2.2.7.1.0.8.f.4.0.1.0.a.2.ip6.arpa. IN PTR

;; AUTHORITY SECTION:
8.f.4.0.1.0.a.2.ip6.arpa. 6890	IN	SOA	ns1.your-server.de. postmaster.your-server.de. 2018084081 86400 1800 3600000 86400

;; Query time: 3 msec
;; SERVER: 10.137.2.1#53(10.137.2.1)
;; WHEN: Wed Jul 11 13:16:28 EDT 2018
;; MSG SIZE  rcvd: 166

user@personal:~$ 
  1. if we encounter these errors again, I think we'll have to contact hetzner to create these PTR entries for the ipv6 addresses. I don't think I have the ability to do this from our server or from our nameserver at cloudflare
  2. hetzner has an article on this issue, but they merely state to contact their support team https://hetzner.co.za/help-centre/domains/ptr/
  3. I went ahead and contacted hetzner (via our robot portal for hetzner2--distinct from hetzner1's konsoleh) asking them to create the PTR record for our ipv6 addresses. And I asked them if this is something I could do myself or if it necessarily required a change on their end.
  4. note that this may not be a serviceable request for some types of accounts at hetzner, and it is a valid concern for moving from a dedicated baremetal server to other types of accounts, such as a cloud server. I documented this concern in the "looking forward" section on the OSE Server article https://wiki.opensourceecology.org/wiki/OSE_Server#Non-dedicated_baremetal_concerns
  1. ...
  1. while I wait for hetzner support's response for how to access all the files for the addon domains, I'll copy the finished backups from the other 5x domains (as opposed to addon domains) to hetzner2 (osemain, osecivi, oseblog, oseforum, and microft), staging them for upload to glacier
    1. osemain's backups (after compression) came to a total of 3.3G
osemain@dedi978:~$ date
Wed Jul 11 19:42:29 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/*
2.9G	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_home.tar.bz2
1.2M	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-openswh.20180706-224656.sql.bz2
192K	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_fef.20180706-224656.sql.bz2
164K	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osesurv.20180706-224656.sql.bz2
4.0K	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-ose_website.20180706-224656.sql.bz2
204M	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_mysqldump-osewiki.20180706-224656.sql.bz2
212M	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/final_backup_before_hetzner1_deprecation_osemain_20180706-224656_webroot.tar.bz2
osemain@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/
3.3G	noBackup/final_backup_before_hetzner1_deprecation_osemain_20180706-224656/
osemain@dedi978:~$ 
    1. osecivi's backups (after compression) came to a total of 15M
osecivi@dedi978:~$ date
Wed Jul 11 19:49:44 CEST 2018
osecivi@dedi978:~$ pwd
/usr/home/osecivi
osecivi@dedi978:~$ whoami
osecivi
osecivi@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/*
2.3M    noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2
1.1M    noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2
180K    noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2
12M     noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2
osecivi@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/
15M     noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/
osecivi@dedi978:~$ 
    1. oseblog's backups (after compression) came to a total of 4.4G
oseblog@dedi978:~$ date
Wed Jul 11 19:58:51 CEST 2018
oseblog@dedi978:~$ pwd
/usr/home/oseblog
oseblog@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/*
1.3G    noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_home.tar.bz2
135M    noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_mysqldump-oseblog.20180706-234052.sql.bz2
3.1G    noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052_webroot.tar.bz2
oseblog@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/
4.4G    noBackup/final_backup_before_hetzner1_deprecation_oseblog_20180706-234052/
oseblog@dedi978:~$ 
    1. oseforum's backups (after compression) came to a total of 956M
oseforum@dedi978:~$ date
Wed Jul 11 20:02:04 CEST 2018
oseforum@dedi978:~$ pwd
/usr/home/oseforum
oseforum@dedi978:~$ whoami
oseforum
oseforum@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/*
854M	noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M	noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M	noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
oseforum@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/
956M	noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/
oseforum@dedi978:~$ 
    1. microft's backups (after compression) came to a total of 6.2G
microft@dedi978:~$ date
Wed Jul 11 20:06:19 CEST 2018
microft@dedi978:~$ pwd
/usr/home/microft
microft@dedi978:~$ whoami
microft
microft@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/*
1.4G	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_home.tar.bz2
528K	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_db2.20180706-234228.sql.bz2
1.3M	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_drupal1.20180706-234228.sql.bz2
3.3G	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_mysqldump-microft_wiki.20180706-234228.sql.bz2
1.7G	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/final_backup_before_hetzner1_deprecation_microft_20180706-234228_webroot.tar.bz2
microft@dedi978:~$ du -sh noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/
6.2G	noBackup/final_backup_before_hetzner1_deprecation_microft_20180706-234228/
microft@dedi978:~$ 
  1. therefore, the total for the 5x domains (excluding addon domains) dropped from ~34.87G before compression to 14.871G after compression
    1. that's a totally reasonable size to back up. In fact, I think I'll leave some of these backups live on hetzner2. I should definitely do so for the forum, in case we ever want to make that site rw again.
    2. I went ahead and created the "hot" backup of the oseforums in the corresponding apache dir
[root@hetzner2 forum.opensourceecology.org]# date
Wed Jul 11 19:16:55 UTC 2018
[root@hetzner2 forum.opensourceecology.org]# pwd
/var/www/html/forum.opensourceecology.org
[root@hetzner2 forum.opensourceecology.org]# du -sh *
955M    final_backup_before_hetzner1_deprecation_oseforum_20180706-230007
2.7G    htdocs
4.0K    readme.txt
173M    vanilla_docroot_backup.20180113
[root@hetzner2 forum.opensourceecology.org]# du -sh final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/*
853M    final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M     final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M     final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
[root@hetzner2 forum.opensourceecology.org]# 
    1. I created a readme.txt explaining what happened for the future sysadmin
[root@hetzner2 forum.opensourceecology.org]# cat readme.txt 
In 2018, the forums were no longer moderated or maintained, and the decision was made to deprecate support for the site. The content is still accessible as static content; new content is not possible.

For more information, please see:

 * https://wiki.opensourceecology.org/wiki/CHG-2018-02-04

On 2018-07-11, during the backup stage of the change to deprecate hetzner1, a backup of the vanilla forums home directory, webroot directory, and database dump was created for upload to long-term backup storage on hetzner1. Because this backup size was manageably small (1G, which is actually smaller than the 2.7G of static content currently live in the forum's docroot), I put a "hot" copy of this dump in the forum's apache dir (but outside the htdocs root, of course) located at hetzner2:/var/www/html/forum.opensourceecology.org/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/

 * https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation

-- Michael Altfield <michael@opensourceecology.org> 2018-07-11
[root@hetzner2 forum.opensourceecology.org]# 
    1. and, finally, I updated the relevant wiki articles for the forums
      1. https://wiki.opensourceecology.org/wiki/CHG-2018-02-04
      2. https://wiki.opensourceecology.org/wiki/Vanilla_Forums
      3. https://wiki.opensourceecology.org/wiki/OSE_Forum
  1. I scp'd all these tarballs to hetzner2
[root@hetzner2 deprecateHetzner1]# date
Wed Jul 11 18:17:14 UTC 2018
[root@hetzner2 deprecateHetzner1]# pwd
/var/tmp/deprecateHetzner1
[root@hetzner2 deprecateHetzner1]# du -sh /var/tmp/deprecateHetzner1/*
6.2G    /var/tmp/deprecateHetzner1/microft
4.4G    /var/tmp/deprecateHetzner1/oseblog
15M     /var/tmp/deprecateHetzner1/osecivi
955M    /var/tmp/deprecateHetzner1/oseforum
3.3G    /var/tmp/deprecateHetzner1/osemain
[root@hetzner2 deprecateHetzner1]# du -sh /var/tmp/deprecateHetzner1/
15G     /var/tmp/deprecateHetzner1/
[root@hetzner2 deprecateHetzner1]# 
  1. I still need to generate the metadata files that explain what these tarballs hold with a message + file list (`tar -t`). This will also hopefully serve as a test to validate that the files were not corrupted in transit during the scp or tar creation. I generally prefer rsync so I can double-tap, but I had some issues with ssh key auth with rsync (so I just used scp, which auth'd fine). A rough sketch of both follows after this list.
  2. of course, I also still need to generate the backup tarballs for the addon domains after hetzner gets back to me on how to access their web roots.
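  3. a rough sketch (untested) of those two remaining steps for one of the staged dirs -- building a tarball-contents metadata file, and checksumming on both ends to confirm nothing was corrupted in transit; an rsync with an explicit identity file might also get around the key-auth issue (all paths/keys below are examples):
# (on hetzner2) build a metadata fileList for one staged backup dir
cd /var/tmp/deprecateHetzner1/oseforum
for tarball in *.tar.bz2; do
	echo "=== ${tarball} ===" >> fileList.txt
	tar -tvf "${tarball}" >> fileList.txt
done

# integrity check: compare against the same command run on hetzner1
sha256sum *.bz2

# alternative to scp: rsync with an explicit key so the transfer can be re-run
# ("double-tap") and only changed/missing files get re-sent
rsync -av -e "ssh -i /root/.ssh/id_rsa" \
	oseforum@dedi978.your-server.de:noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/ \
	/var/tmp/deprecateHetzner1/oseforum/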
  1. ...
  1. I began looking back at the hancock:/home/marcin_ose/backups/uploadToGlacier.sh file that I used back in March to generate metadata files for each of the encrypted tarballs dumped onto glacier https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q1#Sat_Mar_31.2C_2018
hancock% cat uploadToGlacier.sh 
#!/bin/bash -x

############
# SETTINGS #
############

#backupDirs="hetzner2/20171101-072001"
#backupDirs="hetzner1/20170901-052001"
#backupDirs="hetzner1/20171001-052001"
#backupDirs="hetzner1/20171101-062001 hetzner1/20171201-062001"
#backupDirs="hetzner1/20171201-062001"
backupDirs="hetzner2/20170702-052001 hetzner2/20170801-072001 hetzner2/20170901-072001 hetzner2/20171001-072001 hetzner2/20171101-072001 hetzner2/20171202-072001 hetzner2/20180102-072001 hetzner2/20180202-072001 hetzner2/20180302-072001 hetzner2/20180401-072001 hetzner1/20170701-052001 hetzner1/20170801-052001 hetzner1/20180101-062001 hetzner1/20180201-062001 hetzner1/20180301-062002 hetzner1/20180401-052001"
syncDir="/home/marcin_ose/backups/uploadToGlacier"
encryptionKeyFilePath="/home/marcin_ose/backups/ose-backups-cron.key"

export AWS_ACCESS_KEY_ID='<obfuscated>'
export AWS_SECRET_ACCESS_KEY='<obfuscated>'

##############
# DO UPLOADS #
##############

for dir in $(echo $backupDirs); do

	archiveName=`echo ${dir} | tr '/' '_'`;
	timestamp=`date -u --rfc-3339=seconds`
	fileListFilePath="${syncDir}/${archiveName}.fileList.txt"
	archiveFilePath="${syncDir}/${archiveName}.tar"

	#########################
	# archive metadata file #
	#########################
	
	# first, generate a file list to help the future sysadmin get metadata about the archive without having to download the huge archive itself
	echo "================================================================================" > "${fileListFilePath}"
	echo "This file is metadata for the archive '${archiveName}'. In it, we list all the files included in the compressed/encrypted archive (produced using 'ls -lahR ${dir}'), including the files within the tarballs within the archive (produced using 'find "${dir}" -type f -exec tar -tvf '{}' \; ')" >> "${fileListFilePath}"
	echo "" >> "${fileListFilePath}"
	echo " - Michael Altfield <maltfield@opensourceecology.org>" >> "${fileListFilePath}"
	echo "" >> "${fileListFilePath}"
	echo " Note: this file was generated at ${timestamp}" >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"
	echo "#############################" >> "${fileListFilePath}"
	echo "# 'ls -lahR' output follows #" >> "${fileListFilePath}"
	echo "#############################" >> "${fileListFilePath}"
	ls -lahR ${dir} >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"
	echo "############################" >> "${fileListFilePath}"
	echo "# tarball contents follows #" >> "${fileListFilePath}"
	echo "############################" >> "${fileListFilePath}"
	find "${dir}" -type f -exec tar -tvf '{}' \; >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"

	# compress the metadata file
	bzip2 "${fileListFilePath}"

	# encrypt the metadata file
	#gpg --symmetric --cipher-algo aes --armor --passphrase-file "${encryptionKeyFilePath}" "${fileListFilePath}.bz2"
	gpg --symmetric --cipher-algo aes --passphrase-file "${encryptionKeyFilePath}" "${fileListFilePath}.bz2"

	# delete the unencrypted archive
	rm "${fileListFilePath}"

	# upload it
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}_metadata.txt.bz2.asc: this is a metadata file showing the file and dir list contents of the archive of the same name" --body "${fileListFilePath}.bz2.asc"
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}_metadata.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same name" --body "${fileListFilePath}.bz2.gpg"
	/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 "${fileListFilePath}.bz2.gpg"

	if [ $? -eq 0 ]; then
		rm -f "${fileListFilePath}.bz2.gpg"
	fi

	################
	# archive file #
	################

	# generate archive file as a single, compressed file
	tar -cvf "${archiveFilePath}" "${dir}/"

	# encrypt the archive file
	#gpg --symmetric --cipher-algo aes --armor --passphrase-file "${encryptionKeyFilePath}" "${archiveFilePath}"
	gpg --symmetric --cipher-algo aes --passphrase-file "${encryptionKeyFilePath}" "${archiveFilePath}"

	# delete the unencrypted archive
	rm "${archiveFilePath}"

	# upload it
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}.tar.gz.asc: this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates" --body "${archiveFilePath}.asc"
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description "${archiveName}.tar.gz.gpg: this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates" --body "${archiveFilePath}.gpg"
	/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 "${archiveFilePath}.gpg"

	if [ $? -eq 0 ]; then
		rm -f "${archiveFilePath}.gpg"
	fi

done
hancock% 

Tue Jul 10, 2018

  1. hetzner got back to me as expected stating that it's an addon domain. It's hard to convey via email (plus through a language barrier) that I'm aware of the fact that there's an addon domain of the same name = survey. But there is a distinct directory & user unlike other addon domains on the physical server that I cannot access. I'm assuming it was previously used as a non-addon domain, then an addon domain was created. Or something. In any case, there is an actual directory '/usr/home/osesurv' that I need to access. I replied to them asking for an `ls -lah /usr/home/osesurv` to be sent to me.
  2. Marcin forwarded an error report from google's webmaster tools. It showed 1 issue; I'm not concerned. It has a lot of false positives (special pages, robots, etc)
  3. Marcin sent me emails about two users who have not received emails (containing their temp password) after registering for an account on the wiki
    1. Miles Ransaw <milesransaw@gmail.com>
    2. Harman Bains <bains.hmn@gmail.com>
    3. this is extremely frustrating, as it appears that mediawiki does send emails most of the time, but occasionally users complain that emails do not come in (even after checking the spam folder). I can't find a way to reproduce this issue, so what I really need to do is find some logs containing the users' names/emails above.
    4. did some digging around the source code & confirmed that mediawiki falls back on using the mail() function of php in includes/mail/UserMailer.php:sendInternal(). It looks like it also supports using the PEAR mailer. There's a debug line that indicates when it has to fall back to mail().
	 if ( !stream_resolve_include_path( 'Mail/mime.php' ) ) {                                                                                         
			wfDebug( "PEAR Mail_Mime package is not installed. Falling back to text email.\n" );
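    5. as a quick sanity check, the same include-path test that mediawiki does can be reproduced from the cli (one-liner sketch; assumes the php cli binary is present):
# prints bool(false) when PEAR Mail_Mime is not on php's include path
php -r 'var_dump( stream_resolve_include_path( "Mail/mime.php" ) );'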
  1. checking with `pear list`, it looks like we don't have PEAR Mail_Mime installed
[root@hetzner2 ~]# pear list
...
INSTALLED PACKAGES, CHANNEL PEAR.PHP.NET:
=========================================
PACKAGE          VERSION STATE
Archive_Tar      1.4.2   stable
Console_Getopt   1.4.1   stable
PEAR             1.10.4  stable
Structures_Graph 1.1.1   stable
XML_Util         1.4.2   stable
[root@hetzner2 ~]# 
  1. I even checked to see if it was a file within the "PEAR" package. It isn't there.
[root@hetzner2 ~]# pear list-files PEAR | grep -i mail
PHP Warning:  ini_set() has been disabled for security reasons in /usr/share/pear/pearcmd.php on line 30
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
[root@hetzner2 ~]# pear list-files PEAR | grep -i mime
PHP Warning:  ini_set() has been disabled for security reasons in /usr/share/pear/pearcmd.php on line 30
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
PHP Warning:  php_uname() has been disabled for security reasons in /usr/share/pear/PEAR/Registry.php on line 813
[root@hetzner2 ~]# 
  1. compare this to our old server, and we see a discrepancy! The old server has this module. Perhaps this is the issue?
osemain@dedi978:~$ pear list
Installed packages, channel pear.php.net:
=========================================
Package          Version State
Archive_Tar      1.4.3   stable
Console_Getopt   1.4.1   stable
DB               1.7.14  stable
Date             1.4.7   stable
File             1.3.0   stable
HTTP             1.4.1   stable
HTTP_Request     1.4.4   stable
Log              1.12.8  stable
MDB2             2.5.0b5 beta
Mail             1.2.0   stable
Mail_Mime        1.8.9   stable
Mail_mimeDecode  1.5.5   stable
Net_DIME         1.0.2   stable
Net_IPv4         1.3.4   stable
Net_SMTP         1.6.2   stable
Net_Socket       1.0.14  stable
Net_URL          1.0.15  stable
PEAR             1.10.5  stable
SOAP             0.13.0  beta
Structures_Graph 1.1.1   stable
XML_Parser       1.3.4   stable
XML_Util         1.4.2   stable
osemain@dedi978:~$ 
  1. a quick yum search shows the package (we don't fucking want to use the pear package manager)
[root@hetzner2 ~]# yum search pear | grep -i mail
php-channel-swift.noarch : Adds swift mailer project channel to PEAR
php-pear-Mail.noarch : Class that provides multiple interfaces for sending
					 : emails
php-pear-Mail-Mime.noarch : Classes to create MIME messages
php-pear-Mail-mimeDecode.noarch : Class to decode mime messages
[root@hetzner2 ~]# 
  1. so I _could_ install this, but I really want to develop some test that proves it doesn't work, then install. Then re-test & confirm it's fixed.
  2. it looks like we can trigger Mediawiki sending a user an email via this Special:EmailUser page https://wiki.opensourceecology.org/index.php?title=Special:EmailUser/
    1. well, I did that, and the email went through. It came from 'contact@opensourceecology.org'
    2. unfortunately, it looks like wiki-error.log hasn't populated since Jun 18th. I was monitoring the other logs when I triggered the EmailUser above; it was shown in access_log, and nothing came into error_log
  3. I changed the permissions of this wiki-error.log file to "not-apache:apache-admins" and 0660, and it began writing
    1. hopefully this won't fill the disk! iirc, mediawiki has some mechanism to prevent infinite growth..
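    2. if it does become an issue, a logrotate drop-in could cap its growth; a sketch (assuming the log lives at /var/www/html/wiki.opensourceecology.org/wiki-error.log, and keeping the not-apache:apache-admins 0660 ownership from above):
# /etc/logrotate.d/wiki-error (sketch only; not actually installed)
/var/www/html/wiki.opensourceecology.org/wiki-error.log {
	weekly
	rotate 4
	compress
	missingok
	notifempty
	create 0660 not-apache apache-admins
}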
  4. I re-triggered the email, but (surprisingly), I saw very little Mail-related info in the wiki-error.log file, except
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org>
Sending mail via internal mail() function
MediaWiki::preOutputCommit: primary transaction round committed
MediaWiki::preOutputCommit: pre-send deferred updates completed
MediaWiki::preOutputCommit: LBFactory shutdown completed
User: loading options for user 3709 from override cache.
OutputPage::sendCacheControl: private caching;  **
[error] [W0U@lnhB25P5rXweNx8gYQAAAAs] /wiki/Special:EmailUser   ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons
  1. that "ini_set() has been disabled for security reasons" occurs all the time; it shouldn't be an issue. Indeed, the email came through. What I was expecting to see was the "PEAR Mail_Mime package is not installed. Falling back to text email" message. It didn't appear.
  2. I opened the wiki-error.log file in vim, and then I did find this:
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org>                                                                                                                             
Sending mail via internal mail() function
  1. whatever, different logic location. that's a good enough test. let me try to install the pear module, retry the email send. If everything still works, I'll ask the users to try again. Maybe that will just fix it. In any case, it appears that having pear may make it easier to debug.
[root@hetzner2 wiki.opensourceecology.org]# yum install php-pear-Mail-Mime
...
Installed:
  php-pear-Mail-Mime.noarch 0:1.10.2-1.el7                                                                                                                                                              

Complete!
[root@hetzner2 wiki.opensourceecology.org]#
  1. I re-triggered the email to send. It came in, and the log still says it's using the mail() function
[root@hetzner2 wiki.opensourceecology.org]# tail -f wiki-error.log  | grep -C3 -i mail

[DBConnection] Closing connection to database 'localhost'.
[DBConnection] Closing connection to database 'localhost'.
IP: 127.0.0.1
Start request POST /wiki/Special:EmailUser
HTTP HEADERS:
X-REAL-IP: 104.51.202.137
X-FORWARDED-PROTO: https
--
ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
ACCEPT-LANGUAGE: en-US,en;q=0.8
ACCEPT-ENCODING: gzip, deflate, br
REFERER: https://wiki.opensourceecology.org/wiki/Special:EmailUser
COOKIE: donot=cacheme; osewiki_db_wiki__session=tglqeps7foc00ah128mjrap1la56qdlj; osewiki_db_wiki_UserID=3709; osewiki_db_wiki_UserName=Maltfield
DNT: 1
UPGRADE-INSECURE-REQUESTS: 1
--
	"ChronologyProtection": false
}
[DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection.
[error] [W0VBqahT@xRKhqoY@veLxgAAAAU] /wiki/Special:EmailUser   ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons
#0 [internal function]: MWExceptionHandler::handleError(integer, string, string, integer, array)
#1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(693): ini_set(string, string)
#2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(129): Wikimedia\Rdbms\Database->installErrorHandler()
--
#10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(777): wfGetDB(integer)
#11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(396): User::idFromName(string, integer)
#12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/user/User.php(2230): User->load()
#13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(223): User->getId()
#14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(205): SpecialEmailUser::validateTarget(User, User)
#15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(47): SpecialEmailUser::getTarget(string, User)
#16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPage.php(488): SpecialEmailUser->getDescription()
#17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialEmailuser.php(116): SpecialPage->setHeaders()
#18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPage.php(522): SpecialEmailUser->execute(NULL)
#19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPageFactory.php(578): SpecialPage->run(NULL)
#20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(287): SpecialPageFactory::executePath(Title, RequestContext)
#21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(851): MediaWiki->performRequest()
--
User::getBlockedStatus: checking...
User: loading options for user 3709 from override cache.
User: loading options for user 3709 from override cache.
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org>
Sending mail via internal mail() function
MediaWiki::preOutputCommit: primary transaction round committed
MediaWiki::preOutputCommit: pre-send deferred updates completed
MediaWiki::preOutputCommit: LBFactory shutdown completed
User: loading options for user 3709 from override cache.
OutputPage::sendCacheControl: private caching;  **
[error] [W0VBqahT@xRKhqoY@veLxgAAAAU] /wiki/Special:EmailUser   ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons
#0 [internal function]: MWExceptionHandler::handleError(integer, string, string, integer, array)
#1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(693): ini_set(string, string)
#2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(129): Wikimedia\Rdbms\Database->installErrorHandler()
  1. I'm tired of these errors; I commented out line 693 in includes/libs/rdbms/database/Database.php:installErrorHandler()
   /**                                                                                                                                                                                                  
	* Set a custom error handler for logging errors during database connection                                                                                                                          
	*/                                                                                                                                                                                                  
   protected function installErrorHandler() {                                                                                                                                                           
	  $this->mPHPError = false;                                                                                                                                                                         
	  #$this->htmlErrors = ini_set( 'html_errors', '0' );                                                                                                                                               
	  set_error_handler( [ $this, 'connectionErrorLogger' ] );
}     
  1. that cleans up the output at least
[root@hetzner2 wiki.opensourceecology.org]# tail -f wiki-error.log  | grep -C3 -i mail

[DBConnection] Closing connection to database 'localhost'.
[DBConnection] Closing connection to database 'localhost'.
IP: 127.0.0.1
Start request POST /wiki/Special:EmailUser
HTTP HEADERS:
X-REAL-IP: 104.51.202.137
X-FORWARDED-PROTO: https
--
ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
ACCEPT-LANGUAGE: en-US,en;q=0.8
ACCEPT-ENCODING: gzip, deflate, br
REFERER: https://wiki.opensourceecology.org/wiki/Special:EmailUser
COOKIE: donot=cacheme; osewiki_db_wiki__session=tglqeps7foc00ah128mjrap1la56qdlj; osewiki_db_wiki_UserID=3709; osewiki_db_wiki_UserName=Maltfield
DNT: 1
UPGRADE-INSECURE-REQUESTS: 1
--
User::getBlockedStatus: checking...
User: loading options for user 3709 from override cache.
User: loading options for user 3709 from override cache.
UserMailer::send: sending mail to Maltfield <michael@opensourceecology.org>
Sending mail via internal mail() function
MediaWiki::preOutputCommit: primary transaction round committed
MediaWiki::preOutputCommit: pre-send deferred updates completed
MediaWiki::preOutputCommit: LBFactory shutdown completed
  1. curiously, as a test per this article, I wrote a simple test.php script to mail() myself something; it failed https://www.mediawiki.org/wiki/Manual:$wgEnableEmail
[root@hetzner2 mail]# cat /var/www/html/wiki.opensourceecology.org/htdocs/test.php 
<?php
# This is just a test to debug email issues; please delete this file
# --Michael Altfield <michael@opensourceecology.org> 2018-07-10

# we set a cookie to prevent varnish from caching this page
header( "Set-Cookie: donot=cacheme" );

mail( "michael@opensourceecology.org", "my subject", "my message body" );
?>
[root@hetzner2 mail]# 
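    1. to trigger it, a plain https request for the script should suffice (sketch; assumes the vhost serves htdocs/ at the site root):
curl -s https://wiki.opensourceecology.org/test.php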
  1. while tailing the maillog, I see this when I trigger my test script
[root@hetzner2 mail]# tail -f /var/log/maillog
Jul 10 23:41:16 hetzner2 postfix/pickup[11033]: DCFA7681EA4: uid=48 from=<apache>
Jul 10 23:41:16 hetzner2 postfix/cleanup[23835]: DCFA7681EA4: message-id=<20180710234116.DCFA7681EA4@hetzner2.opensourceecology.org>
Jul 10 23:41:16 hetzner2 postfix/qmgr[1631]: DCFA7681EA4: from=<apache@hetzner2.opensourceecology.org>, size=412, nrcpt=1 (queue active)
Jul 10 23:41:17 hetzner2 postfix/smtp[23837]: DCFA7681EA4: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[108.177.119.27]:25, delay=0.27, delays=0.02/0/0.05/0.2, dsn=2.0.0, status=sent (250 2.0.0 OK 1531266077 s19-v6si3778539edc.383 - gsmtp)
Jul 10 23:41:17 hetzner2 postfix/qmgr[1631]: DCFA7681EA4: removed
  1. but I see this when I load the mediawiki EmailUser page
Jul 10 23:42:06 hetzner2 postfix/pickup[11033]: 43A7F681EA4: uid=48 from=<contact@opensourceecology.org>
Jul 10 23:42:06 hetzner2 postfix/cleanup[23835]: 43A7F681EA4: message-id=<osewiki_db-wiki_.5b45444e401807.54864375@wiki.opensourceecology.org>
Jul 10 23:42:06 hetzner2 postfix/qmgr[1631]: 43A7F681EA4: from=<contact@opensourceecology.org>, size=983, nrcpt=1 (queue active)
Jul 10 23:42:06 hetzner2 postfix/smtp[23837]: 43A7F681EA4: to=<michael@opensourceecology.org>, relay=aspmx.l.google.com[64.233.167.26]:25, delay=0.29, delays=0.01/0/0.07/0.22, dsn=2.0.0, status=sent (250 2.0.0 OK 1531266126 j140-v6si391690wmd.76 - gsmtp)
Jul 10 23:42:06 hetzner2 postfix/qmgr[1631]: 43A7F681EA4: removed
  1. so further research & digging into the code suggests that the PEAR module is only used if we want to use an external SMTP server. We don't; we want to use our local smtp server. The default is to use mail(). Since the $wgSMTP var wasn't set on the old server, the old server should have also been using mail() https://www.mediawiki.org/wiki/Manual:$wgSMTP
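    1. for the record, if we ever did want mediawiki to hand mail off to an external smtp relay (which is what would pull in the PEAR Mail path), it would just be a $wgSMTP block in LocalSettings.php; a sketch with placeholder values only -- this is *not* our config:
// LocalSettings.php (sketch only; we intentionally leave $wgSMTP unset & use the local postfix via mail())
$wgSMTP = [
	'host'     => 'tls://smtp.example.org', // placeholder relay, not a real host
	'IDHost'   => 'opensourceecology.org',
	'port'     => 587,
	'auth'     => true,
	'username' => 'no-reply@example.org',   // placeholder
	'password' => 'changeme'                // placeholder
];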
  2. finally, I decided to grep the maillog for one of the users = milesransaw@gmail.com. I got an error that appears to have come from Google regarding "IPv6 sending guidelines"
[root@hetzner2 htdocs]# grep -irC5 'milesransaw@gmail.com' /var/log
/var/log/maillog-20180710-Jul  9 22:21:08 hetzner2 postfix/scache[24613]: statistics: address lookup hits=0 miss=4 success=0%
/var/log/maillog-20180710-Jul  9 22:21:08 hetzner2 postfix/scache[24613]: statistics: max simultaneous domains=1 addresses=2 connection=2
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/pickup[24510]: 5039E681DE9: uid=48 from=<contact@opensourceecology.org>
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/cleanup[25828]: 5039E681DE9: message-id=<osewiki_db-wiki_.5b43e0dd4ba6c0.09234917@wiki.opensourceecology.org>
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/qmgr[1631]: 5039E681DE9: from=<contact@opensourceecology.org>, size=1236, nrcpt=1 (queue active)
/var/log/maillog-20180710:Jul  9 22:25:33 hetzner2 postfix/smtp[25830]: 5039E681DE9: to=<milesransaw@gmail.com>, relay=gmail-smtp-in.l.google.com[2a00:1450:400c:c0c::1b]:25, delay=0.36, delays=0.02/0/0.05/0.3, dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[2a00:1450:400c:c0c::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . c19-v6si14396804wrc.112 - gsmtp (in reply to end of DATA command))
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/cleanup[25828]: A808D681EA4: message-id=<20180709222533.A808D681EA4@hetzner2.opensourceecology.org>
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/bounce[25832]: 5039E681DE9: sender non-delivery notification: A808D681EA4
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/qmgr[1631]: A808D681EA4: from=<>, size=3974, nrcpt=1 (queue active)
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/qmgr[1631]: 5039E681DE9: removed
/var/log/maillog-20180710-Jul  9 22:25:33 hetzner2 postfix/smtp[25830]: A808D681EA4: to=<contact@opensourceecology.org>, relay=aspmx.l.google.com[2a00:1450:400c:c0c::1b]:25, delay=0.15, delays=0/0/0.06/0.08, dsn=5.7.1, status=bounced (host aspmx.l.google.com[2a00:1450:400c:c0c::1b] said: 550-5.7.1 [2a01:4f8:172:209e::2] Our system has detected that this message does 550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and 550-5.7.1 authentication. Please review 550-5.7.1  https://support.google.com/mail/?p=IPv6AuthError for more information 550 5.7.1 . s9-v6si13928170wrm.364 - gsmtp (in reply to end of DATA 
  1. I did another search for the other user = bains.hmn@gmail.com. Interestingly, I got no results at all this time
[root@hetzner2 htdocs]# grep -irC5 'bains.hmn' /var/log
[root@hetzner2 htdocs]# 
    1. when I try to email this user using Special:EmailUser, I get an error = "This user has not specified a valid email address."
    2. digging into the DB, I see this user set their email to 'bains.hmn@gmail.com', which seems fine to me
MariaDB [osewiki_db]> select user_email from wiki_user where user_name = 'Hbains' limit 10;
+---------------------+
| user_email          |
+---------------------+
| bains.hmn@gmail.com |
+---------------------+
1 row in set (0.00 sec)

MariaDB [osewiki_db]> 
    1. anyway, let me continue with the one that's not a dead-end. Unfortunately, the IPv6AuthError link just sends me to a generic google "Bulk Sending Guidelines" doc https://support.google.com/mail/?p=IPv6AuthError
    2. I already configured our spf records, but google wants more than just that. First I have to get the "Gmail settings administrator privileges" from Marcin (a quick check of our current SPF record is sketched below).
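      1. for reference, a quick way to eyeball the SPF record we already published is just a dig for the apex domain's TXT records
# should return a v=spf1 record among the TXT records
dig +short TXT opensourceecology.org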
  1. actually, the error log specifically mentions "550-5.7.1"; more on this specific error number can be found in this "SMTP Error Reference" https://support.google.com/a/answer/3726730?hl=en
    1. 550, "5.7.1", Email quota exceeded.
    2. 550, "5.7.1", Invalid credentials for relay.
    3. 550, "5.7.1", Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been blocked. Review our Bulk Senders Guidelines.
    4. 550, "5.7.1", Our system has detected that this message is likely unsolicited mail. To reduce the amount of spam sent to Gmail, this message has been blocked. For more information, review this article.
    5. 550, "5.7.1", The IP you're using to send mail is not authorized to send email directly to our servers. Please use the SMTP relay at your service provider instead. For more information, review this article.
    6. 550, "5.7.1", The user or domain that you are sending to (or from) has a policy that prohibited the mail that you sent. Please contact your domain administrator for further details. For more information, review this article.
    7. 550, "5.7.1", Unauthenticated email is not accepted from this domain.
    8. 550, "5.7.1", Daily SMTP relay limit exceeded for customer. For more information on SMTP relay sending limits please contact your administrator or review this article.
  2. actually, I don't think any of those are correct. This appears to be caused by us not having an "AAAA" dns record pointing to our ipv6 address, even though our server has 2x ipv6 addresses. In this case, it appears that our server contacted google from ipv6 address = "2a01:4f8:172:209e::2", but google didn't get that address back when it attempted to resolve 'opensourceecology.org' https://serverfault.com/questions/732187/sendmail-can-not-deliver-to-gmail-ipv6-sending-guidelines-regarding-ptr-record
[root@hetzner2 htdocs]# ip -6 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1
	inet6 ::1/128 scope host 
	   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
	inet6 2a01:4f8:172:209e::2/64 scope global 
	   valid_lft forever preferred_lft forever
	inet6 fe80::921b:eff:fe94:7c4/64 scope link 
	   valid_lft forever preferred_lft forever
[root@hetzner2 htdocs]# 
  1. I created an AAAA (ipv6 A) dns record (on cloudflare) pointing opensourceecology.org to 2a01:4f8:172:209e::2
  2. ^ that should take some time to propagate, and--since I can't reproduce the issue--we'll just wait to see if it occurs again & check the logs then
  3. a simpler solution might be to just change postfix to use ipv4 only, but I'll do that as a last resort (sketched after the dig output below) https://www.linuxquestions.org/questions/linux-newbie-8/gmail-this-message-does-not-meet-ipv6-sending-guidelines-regarding-ptr-records-4175598760/
  4. note that, interestingly, the ptr (reverse lookup) records of our ipv4 addresses don't point to opensourceecology.org; they point to hetzner
user@personal:~$ dig +short -x 138.201.84.223
static.223.84.201.138.clients.your-server.de.
user@personal:~$ dig +short -x 138.201.84.243
static.243.84.201.138.clients.your-server.de.
user@personal:~$ 
  1. I'll have to check this tomorrow after propagation takes place. Hopefully it'll return a result like the ipv4 lookups above, and we've fixed the issue
user@personal:~$ dig +short -x "2a01:4f8:172:209e::2"
user@personal:~$ 
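  2. two follow-ups worth sketching here: checking whether the new AAAA record has propagated yet, and the ipv4-only postfix fallback mentioned above (hedged sketch; I have *not* actually applied the postconf change)
# forward lookup: should eventually return 2a01:4f8:172:209e::2 once cloudflare propagates
dig +short AAAA opensourceecology.org

# last-resort fallback: force postfix to deliver over ipv4 only, then reload it
postconf -e "inet_protocols = ipv4"
systemctl reload postfix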
  1. in the meantime, I've manually reset the users' passwords & sent them emails manually
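    1. for the record, a rough sketch of how such a reset can be done from the CLI with mediawiki's changePassword.php maintenance script (the docroot path & password below are placeholders, not what I actually used)
# placeholders: wiki docroot path & new password
cd /var/www/html/wiki.opensourceecology.org/htdocs
php maintenance/changePassword.php --user='Hbains' --password='CHANGEME'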
  1. ...
  1. Marcin had a 403 false-positive when attempting to embed an instagram feed. I whitelisted a rule & confirmed that I could submit the contents.
    1. id 973308, xss attack
    2. this fixed it & I emailed Marcin
  1. ...
  1. Marcin mentioned that a link to our wiki in a facebook feed shows a 403 on facebook. The link works, but the facebook "preview" in the comment feed shows a 403 Forbidden. Because facebook is dumb, I can't permalink directly to the comment (or maybe I could if I had a facebook account--not sure), but it's on this page https://www.facebook.com/groups/398759490316633/#
  2. I grepped through all the gzip'd modsecurity log files with the string 'Paysan' in it, and I found a bunch of results. I limited it further to include 'facebook', and found there was a useragent = facebookexternalhit/1.1. This was causing a 403 from rule id = 958291, protocol violation = "Range: field exists and begins with 0."
[root@hetzner2 httpd]# date
Wed Jul 11 04:09:26 UTC 2018
[root@hetzner2 httpd]# pwd
/var/log/httpd
[root@hetzner2 httpd]# for log in $(ls -1 | grep -i modsec | grep -i gz); do zcat $log | grep -iC50 'Paysan' ; done | grep -iC50 facebook
Server: Apache
Engine-Mode: "ENABLED"

--f6f9de1f-Z--

--1d0c4d75-A--
[08/Jul/2018:06:58:57 +0000] W0G2MRiIV543eZ9b0krEgAAAAAE 127.0.0.1 37540 127.0.0.1 8000
--1d0c4d75-B--
GET /entry/openid?Target=discussion%2F379%2Ffarming-agriculture-and-ranching-livestock-management-software%3Fpost%23Form_Body&url=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid HTTP/1.1
X-Real-IP: 203.133.174.77
X-Forwarded-Proto: https
X-Forwarded-Port: 443  
Host: forum.opensourceecology.org
User-Agent: Mozilla/5.0 (compatible; Daum/4.1; +http://cs.daum.net/faq/15/4118.html?faqId=28966)
Accept-Language: ko-kr,ko;q=0.8,en-us;q=0.5,en;q=0.3
Accept: */*
Accept-Charset: utf-8,EUC-KR;q=0.7,*;q=0.5
X-Forwarded-For: 203.133.174.77, 127.0.0.1, 127.0.0.1
hash: #forum.opensourceecology.org
Accept-Encoding: gzip  
X-Varnish: 13886122

--1d0c4d75-F--
HTTP/1.1 403 Forbidden 
Content-Length: 214
Content-Type: text/html; charset=iso-8859-1

--1d0c4d75-E--

--1d0c4d75-H--
Message: Access denied with code 403 (phase 2). Pattern match "([\\~\\!\\@\\#\\$\\%\\^\\&\\*\\(\\)\\-\\+\\=\\{\\}\\[\\]\\|\\:\\;\"\\'\\\xc2\xb4\\\xe2\x80\x99\\\xe2\x80\x98\\`\\<\\>].*?){4,}" at ARGS:Target. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "159"] [id "981173"] [rev "2"] [msg "Restricted SQL Character Anomaly Detection Alert - Total # of special characters exceeded"] [data "Matched Data: - found within ARGS:Target: discussion/379/farming-agriculture-and-ranching-livestock-management-software?post#Form_Body"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"]
Action: Intercepted (phase 2)
Stopwatch: 1531033137000684 589 (- - -)
Stopwatch2: 1531033137000684 589; combined=362, p1=87, p2=247, p3=0, p4=0, p5=27, sr=20, sw=1, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED" 

--1d0c4d75-Z--

--52e6a01c-A--
[08/Jul/2018:06:59:50 +0000] W0G2ZlraFr00R9M6JipfIQAAAAI 127.0.0.1 37638 127.0.0.1 8000
--52e6a01c-B--
GET /wiki/L%E2%80%99Atelier_Paysan HTTP/1.0
X-Real-IP: 66.220.146.185
X-Forwarded-Proto: https
X-Forwarded-Port: 443  
Host: wiki.opensourceecology.org
Accept: */*
User-Agent: facebookexternalhit/1.1
Range: bytes=0-131071  
X-Forwarded-For: 127.0.0.1
Accept-Encoding: gzip  
X-Varnish: 14190516

--52e6a01c-F--
HTTP/1.1 403 Forbidden 
Content-Length: 225
Connection: close
Content-Type: text/html; charset=iso-8859-1

--52e6a01c-E--

--52e6a01c-H--
Message: Access denied with code 403 (phase 2). String match "bytes=0-" at REQUEST_HEADERS:Range. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "428"] [id "958291"] [rev "2"] [msg "Range: field exists and begins with 0."] [data "bytes=0-131071"] [severity "WARNING"] [ver "OWASP_CRS/2.2.9"] [maturity "6"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/INVALID_HREQ"]
Action: Intercepted (phase 2)
Apache-Handler: php5-script
Stopwatch: 1531033190783654 371 (- - -)
Stopwatch2: 1531033190783654 371; combined=130, p1=87, p2=14, p3=0, p4=0, p5=29, sr=22, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED" 

--52e6a01c-Z--

--282b2851-A--
[08/Jul/2018:06:59:51 +0000] W0G2ZxiIV543eZ9b0krEgQAAAAE 127.0.0.1 37642 127.0.0.1 8000
--282b2851-B--
GET /wiki/L%E2%80%99Atelier_Paysan HTTP/1.0
X-Real-IP: 31.13.122.23
X-Forwarded-Proto: https
X-Forwarded-Port: 443  
Host: wiki.opensourceecology.org
Accept: */*
User-Agent: facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
Range: bytes=0-524287  
X-Forwarded-For: 127.0.0.1
Accept-Encoding: gzip  
X-Varnish: 13886168

--282b2851-F--
HTTP/1.1 403 Forbidden 
Content-Length: 225
Connection: close
Content-Type: text/html; charset=iso-8859-1

--282b2851-E--

--282b2851-H--
Message: Access denied with code 403 (phase 2). String match "bytes=0-" at REQUEST_HEADERS:Range. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "428"] [id "958291"] [rev "2"] [msg "Range: field exists and begins with 0."] [data "bytes=0-524287"] [severity "WARNING"] [ver "OWASP_CRS/2.2.9"] [maturity "6"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/INVALID_HREQ"]
Action: Intercepted (phase 2)
Apache-Handler: php5-script
Stopwatch: 1531033191244028 353 (- - -)
Stopwatch2: 1531033191244028 353; combined=129, p1=85, p2=14, p3=0, p4=0, p5=30, sr=20, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED" 

--282b2851-Z--

--41d26d46-A--
[08/Jul/2018:07:03:42 +0000] W0G3ThiIV543eZ9b0krEhAAAAAE 127.0.0.1 38196 127.0.0.1 8000
--41d26d46-B--
GET /entry/register?Target=discussion%2F541%2Fsolved-emailprocessor.php-sends-all-emails-to-039civimail.ignored039 HTTP/1.1
X-Real-IP: 96.73.213.217
X-Forwarded-Proto: https
X-Forwarded-Port: 443  
Host: forum.opensourceecology.org
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.2057.400 QQBrowser/9.5.10158.400
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
DNT: 1
X-Forwarded-For: 96.73.213.217, 127.0.0.1, 127.0.0.1
Accept-Encoding: gzip  
hash: #forum.opensourceecology.org
X-Varnish: 14190751

--41d26d46-F--
[root@hetzner2 httpd]# 
    1. I whitelisted this rule in the vhost config file
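      1. a minimal sketch of that whitelist (the vhost config path below is an assumption, and I'm appending at the server-config level for brevity; it normally lives inside the relevant <VirtualHost> or <LocationMatch> block)
# assumption: vhost config path; disable the "Range: field exists and begins with 0." rule
echo 'SecRuleRemoveById 958291' >> /etc/httpd/conf.d/wiki.opensourceecology.org.conf
apachectl configtest && systemctl reload httpd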

Fri Jul 06, 2018

  1. yesterday I calculated that we should back up ~34.87G of data from hetzner1 to glacier before shutting down the node and terminating its contract
    1. note that this size will likely be much smaller after compression.
  2. I confirmed that we have 128G of available space to '/' on hetzner2
[root@hetzner2 ~]# date
Fri Jul  6 17:59:12 UTC 2018
[root@hetzner2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        197G   60G  128G  32% /
devtmpfs         32G     0   32G   0% /dev
tmpfs            32G     0   32G   0% /dev/shm
tmpfs            32G  3.1G   29G  10% /run
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  289M  174M  63% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1005
[root@hetzner2 ~]# 
  1. we also have 165G of available space on '/usr' on hetzner1
osemain@dedi978:~$ date
Fri Jul  6 19:59:31 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/dm-0              9.8G  363M  8.9G   4% /
udev                    10M     0   10M   0% /dev
tmpfs                  787M  788K  786M   1% /run
/dev/dm-1              322G  142G  165G  47% /usr
tmpfs                  2.0G     0  2.0G   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                95M   30M   66M  32% /boot
/dev/mapper/vg-tmp     4.8G  308M  4.3G   7% /tmp
/dev/mapper/vg-var      20G  2.3G   17G  13% /var
tmpfs                  2.0G     0  2.0G   0% /var/spool/exim/scan
/dev/mapper/vg-vartmp  5.8G  1.8G  3.8G  32% /var/tmp
osemain@dedi978:~$ 
  1. while it may make sense to do this upload to glacier on hetzner1, I've had hetzner1 terminate my screen sessions randomly in the past. I'd rather do it on hetzner2--where I actually have control over the server with root credentials. Therefore, I think I'll make the compressed tarballs on hetzner1 & scp them to hetzner2. On hetzner2 I'll encrypt the tarballs and create their (also encrypted) corresponding metadata files (listing all the files in the tarballs, for easy/cheaper querying later), and upload both (a rough sketch of the hetzner2-side step is below).
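    1. a rough sketch of that hetzner2-side step for a single tarball (the passphrase-file path is an assumption; the tarball name reuses the oseforum one listed further down)
# assumption: passphrase file path
tarball="final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2"

# metadata file: plaintext listing of the tarball's contents, for easy/cheaper querying later
tar -tjf "${tarball}" > "${tarball}.filelist.txt"

# symmetrically encrypt both the tarball & its metadata file before pushing them to glacier
gpg --batch --symmetric --cipher-algo AES256 --passphrase-file /root/backups/passphrase.key "${tarball}"
gpg --batch --symmetric --cipher-algo AES256 --passphrase-file /root/backups/passphrase.key "${tarball}.filelist.txt"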
  2. I created a wiki article for this CHG, which will be the canonical URL listed in the metadata files for info on what this data is that I've uploaded to glacier https://wiki.opensourceecology.org/wiki/CHG-2018-07-06_hetzner1_deprecation
  3. I discovered that the DBs on hetzner1 are necessarily accessible to the public Internet (ugh).
    1. so I _could_ do the mysqldump from hetzner2, but it's better to do it locally (data tx & sec), and then compress it _before_ sending it to hetzner2
  4. began backing-up files on osemain
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=openswh
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=ose_fef
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=ose_website
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osesurv
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osewiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. began backups on oseblog
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DB
dbName=oseblog
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. began backups on osecivi
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=osecivi
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=osedrupal
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. began backup of oseforum
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DB
dbName=oseforum
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. began backup of microft
# declare vars
stamp=`date -u +%Y%m%d-%H%M%S`
backupDir="${HOME}/noBackup/final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
backupFilePrefix="final_backup_before_hetzner1_deprecation_`whoami`_${stamp}"
excludeArg1="${HOME}/backups"
excludeArg2="${HOME}/noBackup"

# prepare backup dir
mkdir -p "${backupDir}"

# backup home files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_home.tar.bz2" ${HOME}/*

# backup web root files
time nice tar --exclude "${excludeArg1}" --exclude "${excludeArg2}" -cjvf "${backupDir}/${backupFilePrefix}_webroot.tar.bz2" /usr/www/users/`whoami`/*

# dump DBs
dbName=microft_db2
dbUser=microft_2
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=microft_drupal1
dbUser=microft_d1u
dbPass=CHANGEME
time nice mysqldump -u"${dbUser}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"

dbName=microft_wiki
dbPass=CHANGEME
time nice mysqldump -u"${dbName}" -p"${dbPass}" --all-databases | bzip2 -c > "${backupDir}/${backupFilePrefix}_mysqldump-${dbName}.${stamp}.sql.bz2"
  1. after compression (but before encryption), here are the resulting sizes of the backups
    1. oseforum
oseforum@dedi978:~$ find noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
57M     noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M     noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
oseforum@dedi978:~$ 
    1. osecivi 16M
osecivi@dedi978:~/noBackup$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
180K	/usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osedrupal.20180706-233128.sql.bz2
2.3M	/usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_home.tar.bz2
12M	/usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_webroot.tar.bz2
1.1M	/usr/home/osecivi/noBackup/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128/final_backup_before_hetzner1_deprecation_osecivi_20180706-233128_mysqldump-osecivi.20180706-233128.sql.bz2
osecivi@dedi978:~/noBackup$ 
    1. oseforum 957M
oseforum@dedi978:~$ find $HOME/noBackup/final_backup_before_hetzner1_deprecation_* -type f -exec du -sh '{}' \;
854M    /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2
46M     /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_mysqldump-oseforum.20180706-230007.sql.bz2
57M     /usr/home/oseforum/noBackup/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007/final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_webroot.tar.bz2
oseforum@dedi978:~$ 
  1. created a safe dir on hetzner2 to store the files before encrypting & uploading to glacier
[root@hetzner2 tmp]# cd /var/tmp
[root@hetzner2 tmp]# mkdir deprecateHetzner1
[root@hetzner2 tmp]# chown root:root deprecateHetzner1/
[root@hetzner2 tmp]# chmod 0700 deprecateHetzner1/
[root@hetzner2 tmp]# ls -lah deprecateHetzner1/
total 8.0K
drwx------   2 root root 4.0K Jul  6 23:14 .
drwxrwxrwt. 52 root root 4.0K Jul  6 23:14 ..
[root@hetzner2 tmp]# 
  1. ...
  1. while the backups were running on hetzner2, I began looking into migrating hetzner2's active daily backups to s3.
  2. I logged into the aws console for the first time in a couple months, and I saw that our first bill was $5.20 in May, $1.08 in June, and $1.08 in July. Not bad, but that's going to go up after we dump all this hetzner1 stuff in glacier & start using s3 for our dailys. In any case, it'll be far, far, far less than the amount we'll be saving by ending our contract for hetzner1!
  3. I created our first bucket in s3 named 'oseserverbackups'
    1. important: it was set to "do not grant public read access to this bucket" !
  4. looks like I already created an IAM user & creds with access to both glacier & s3. I added this to hetzner2:/root/backups/backup.settings
  5. I installed aws for the root user on hetzner2, added the creds, and confirmed that I could access the new bucket
# create temporary directory
tmpdir=`mktemp -d`

pushd "$tmpdir"

/usr/bin/wget "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip"
/usr/bin/unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws

# cleanly exit
popd
/bin/rm -rf "$tmpdir"
exit 0

[root@hetzner2 tmp.vbm56CUp50]# aws --version
aws-cli/1.15.53 Python/2.7.5 Linux/3.10.0-693.2.2.el7.x86_64 botocore/1.10.52
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
[root@hetzner2 tmp.vbm56CUp50]# aws configure
AWS Access Key ID [None]: <obfuscated>
AWS Secret Access Key [None]: <obfuscated>
Default region name [None]: us-west-2
Default output format [None]: 
[root@hetzner2 tmp.vbm56CUp50]# aws s3 ls
2018-07-07 00:05:22 oseserverbackups
[root@hetzner2 tmp.vbm56CUp50]# 

  1. successfully tested an upload to s3
[root@hetzner2 backups]# cat /var/tmp/test.txt
some file destined for s3 this is
[root@hetzner2 backups]# aws s3 cp /var/tmp/test.txt s3://oseserverbackups/test.txt
upload: ../../var/tmp/test.txt to s3://oseserverbackups/test.txt 
[root@hetzner2 backups]# 
  1. confirmed that I could see the file in the aws console wui
  2. clicked the link for the object, and confirmed that I got an AccessDenied error https://s3-us-west-2.amazonaws.com/oseserverbackups/test.txt
  3. next step: enable lifecycle policy. Ideally, I want to be able to say that files uploaded on the first of the month (either by metadata of the upload timestamp or by regex match on object name) will automatically "freeze" into glacier after a few days, and all other files will just get deleted automatically after a few days.
    1. so it looks like we can limit by object name match or by tag. It's probably better if we just have our script add a 'monthly-backup' tag to the object when uploading on the first-of-the-month, then have our lifecycle policy built around that bit (a quick tagging sketch is below, after the provider notes) https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
    2. ugh, TIL s3 objects under the default storage class = STANDARD_IA have a minimum lifetime of 30 days. If you delete an object before 30 days, you're still charged for 30 days https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html
      1. that means we'll have to store 30 copies of our daily backups at minimum, which are 15G as of now. That's 450G stored to s3 = 0.023 * 450 = $10.35/mo * 12 = $124.2/yr. That sucks.
    3. per my previous research, we may want to look into using one of these providers instead:
      1. Backblaze B2 https://www.backblaze.com/b2/cloud-storage.html
      2. Google Nearline & Coldline https://cloud.google.com/storage/archival/
      3. Microsoft OneDrive https://onedrive.live.com/about/en-us/
  4. a quick calculation on the backblaze price calculator (biased, of course) with initial_upload=15G, monthly_upload=450G, monthly_delete=435G, monthly_download=3G gives a cost of $7.11/year. They say that would cost $30.15/yr on s3, $29.88/yr on google, and $26.10 on Microsoft. Well, at least they're wrong in a good way: per my calculation above, it would cost even more than that on s3. Hopefully they know their own pricing better. $8/year is great for backing up 15G every day.
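  5. circling back to the tag-based lifecycle idea above, here's a hedged sketch of tagging an already-uploaded object (reusing the test.txt object from above) so a lifecycle rule keyed on the 'monthly-backup' tag could match it
# tag the test object; a lifecycle rule filtering on the 'monthly-backup' tag would then match it
aws s3api put-object-tagging --bucket oseserverbackups --key test.txt --tagging 'TagSet=[{Key=monthly-backup,Value=true}]'

# confirm the tag stuck
aws s3api get-object-tagging --bucket oseserverbackups --key test.txt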

Thu Jul 05, 2018

  1. logged time for last week
  2. using my ose account, I uploaded the remaining misc photos from my visit to FeF to a new album https://photos.app.goo.gl/YZGTQdWnfFWcJc6p8
    1. I created a slideshow out of this & added it to the wiki here https://wiki.opensourceecology.org/wiki/Michael_Photo_Folder
  1. ...
  1. began revisiting hetzner1. We want to dump all the content onto glacier before we terminate our contract here.
  2. I just checked the billing section. Wow, it's 74.79 eur per month. What a rip-off! Hopefully we won't have to pay that much longer..
  3. because we don't have root, this is more tricky. First, we need to get a list of all the users & investigate what data each has. If the total amount of data is small enough, we can just tar it all up & ship it to glacier.
  4. it's not an exact test, but skimming through /etc/passwd suggests that there may be 11 users on hetzner1: osemain, osecivi, oseblog, oseforum, oseirc, oseholla, osesurv, sandbox, microft, zabbix, openswh
  5. a better test is probably checking which users' shells are /bin/bash
osemain@dedi978:~$ grep '/bin/bash' /etc/passwd
root:x:0:0:root:/root:/bin/bash
postgres:x:111:113:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash
osemain:x:1010:1010:opensourceecology.org:/usr/home/osemain:/bin/bash
osecivi:x:1014:1014:civicrm.opensourceecology.org:/usr/home/osecivi:/bin/bash
oseblog:x:1015:1015:blog.opensourceecology.org:/usr/home/oseblog:/bin/bash
oseforum:x:1016:1016:forum.opensourceecology.org:/usr/home/oseforum:/bin/bash
osesurv:x:1020:1020:survey.opensourceecology.org:/usr/home/osesurv:/bin/bash
microft:x:1022:1022:test.opensourceecology.org:/usr/home/microft:/bin/bash
osemain@dedi978:~$ 
  1. excluding postgres & root, it looks like 6x users (many of the others are addons, and I think they're under 'osemain') = osemain, osecivi, oseblog, oseforum, osesurv, and microft
osemain@dedi978:~$ ls -lah public_html/archive/addon-domains/
total 32K
drwxr-xr-x  8 osemain users   4.0K Jan 18 16:56 .
drwxr-xr-x 13 osemain osemain 4.0K Jun 20  2017 ..
drwxr-xr-x  2 osemain users   4.0K Jul 26  2011 addontest
drwx---r-x  2 osemain users   4.0K Jul 26  2011 holla
drwx---r-x  2 osemain users   4.0K Jul 26  2011 irc
drwxr-xr-x  2 osemain osemain 4.0K Jan 18 16:59 opensourcewarehouse.org
drwxr-xr-x  2 osemain osemain 4.0K Feb 23  2012 sandbox
drwxr-xr-x 13 osemain osemain 4.0K Dec 30  2017 survey
osemain@dedi978:~$ 
  1. I was able to ssh in as osemain, osecivi, oseblog, and oseforum (using my pubkey, so I must have set this up earlier when investigating what I needed to migrate). I was _not_ able to ssh in as 'osesurv' and 'microft'
  2. on the main page of the konsoleh wui after logging in, there's 5 domains listed: "(blog|civicrm|forum|test).opensourceecology.org" and 'opensourceecology.org'. The one that stands out here is 'test.opensourceecology.org'. Upon clicking on it & digging around, I found that the username for this domain is 'microft'.
    1. In this test = microft domain (in the konsoleh wui), I tried to click 'WebFTP' (which is how I would upload my ssh key), but I got an error "Could not connect to server dedi978.your-server.de:21 with user microft". Indeed, it looks like the account is "suspended"
    2. to confirm further, I clicked the "FTP" link for the forum account, and confirmed that I could ftp in (ugh) as the user & password supplied by the wui (double-ugh). I tried again using the user/pass from the test=microft domain, and I could not login
    3. ^ that said, it *does* list it as using 4.49G of disk space + 3 DBs
    4. the 3 DBs are mysql = microft_db2 (24.3M), microft_drupal1 (29.7M), and microft_wiki (19.4G). Holy shit, 19.4G DB!
      1. digging into the last db's phpmyadmin, I see a table named "wiki_objectcache" that's 4.2G, "wiki_searchindex" that's 2.7G, and "wiki_text" that's 7.4G. This certainly looks like a MediaWiki DB (a CLI version of this size query is sketched below).
      2. from the wiki_user table, the last user_id = 1038401 = Traci Clutter, which was created on 20150702040307
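      3. for reference, the same table sizes can be pulled without phpmyadmin (the db user/credentials below are an assumption; I actually used the phpmyadmin wui)
# assumption: db user; lists the biggest tables in the microft_wiki db
mysql -u microft_wiki -p -e "SELECT table_name, ROUND((data_length+index_length)/1024/1024/1024,1) AS size_gb FROM information_schema.tables WHERE table_schema='microft_wiki' ORDER BY (data_length+index_length) DESC LIMIT 5;"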
  3. I found that all these accounts are still accessible from a subdomain of our dedi978.your-server.de dns:
    1. http://blog.opensourceecology.org.dedi978.your-server.de/
      1. this one gives a 500 internal server error
    2. http://civicrm.opensourceecology.org.dedi978.your-server.de/
      1. this one actually loads a drupal page with a login, though the only content is " Welcome to OSE CRM / No front page content has been created yet."
    3. http://forum.opensourceecology.org.dedi978.your-server.de/
      1. this one still loads, and appears to be fully functional (ugh)
    4. http://test.opensourceecology.org.dedi978.your-server.de/
      1. this gives a 403 forbidden with the comment "You don't have permission to access / on this server." "Server unable to read htaccess file, denying access to be safe"
  4. In digging through the test.opensourceecology.org domain's settings, I found "Services -> Settings -> Block / Unblock". It (unlike the others) was listed as "Status: Blocked." So I clicked the "Unblock it" button and got "The domain has been successfully unblocked.".
    1. now WebFTP worked
    2. this now loads too http://test.opensourceecology.org.dedi978.your-server.de/ ! It's pretty horribly broken, but it appears to be a "True Fans Drupal" "Microfunding Proposal" site. I wouldn't be surprised if it got "blocked" due to being a hacked outdated version of drupal.
    3. WebFTP didn't let me upload a .ssh dir (it appears to not work with hidden dirs = '.' prefix), but I was able to FTP in (ugh)
    4. I downloaded the existing .ssh/authorized_keys file, added my key to the end of it, and re-uploaded it
    5. I was able to successfully ssh-in!
  5. ok, now that I have access to what I believe to be all the accounts, let's determine what they've got in files
  6. I found a section of the hetzner konsoleh wui that shows sizes per account (Under Statistics -> Account overview)
    1. opensourceecology.org 99.6G
    2. blog.opensourceecology.org 8.71G
    3. test.opensourceecology.org 4.49G
    4. forum.opensourceecology.org 1.15G
    5. civicrm.opensourceecology.org 170M
    6. ^ all sites display "0G" for "Traffic"
  7. osemain has 5.7G, not including the websites that we migrated--whose data has been moved to 'noBackup'
osemain@dedi978:~$ date
Fri Jul  6 01:20:41 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ whoami
osemain
osemain@dedi978:~$ du -sh * --exclude='noBackup'
983M	backups
1.3M	bin
4.0K	composer.json
36K	composer.lock
4.0K	cron
4.0K	emails.txt
9.8M	extensions
16K	freemind.sourceforge.net
4.0K	id-dsa-iphone.pub
4.0K	id_rsa-hetzner
4.0K	id_rsa-hetzner.pub
288K	installer
0	jboss
470M	jboss-4.2.3.GA
4.0K	jboss-command-line.txt
234M	jdk1.6.0_29
0	jdk-6
808K	mbkp
0	opensourceecology.org
4.0K	passwd.cdb
4.0K	PCRE-patch
0	public_html
4.0K	uc?id=0B1psBarfpPkzb0JQV1B6Z01teVk
28K	users
16K	var-run
2.9M	vendor
4.0K	videos
4.0K	wiki_olddocroot
1.1M	wrapper-linux-x86-64-3.5.13
2.6G	www_logs
osemain@dedi978:~$ du -sh --exclude='noBackup'
5.7G	.
osemain@dedi978:~$ 
  1. oseblog has 2.7G
oseblog@dedi978:~$ date
Fri Jul  6 02:39:11 CEST 2018
oseblog@dedi978:~$ pwd
/usr/home/oseblog
oseblog@dedi978:~$ whoami
oseblog
oseblog@dedi978:~$ du -sh *
8.0K	bin
0	blog.opensourceecology.org
12K	cron
788K	mbkp
349M	oftblog.dump
4.0K	passwd.cdb
0	public_html
208K	tmp
104K	users
1.2G	www_logs
oseblog@dedi978:~$ du -sh
2.7G	.
oseblog@dedi978:~$ 
  1. osecivi has 44M
osecivi@dedi978:~$ date
Fri Jul  6 02:40:19 CEST 2018
osecivi@dedi978:~$ pwd
/usr/home/osecivi
osecivi@dedi978:~$ whoami
osecivi
osecivi@dedi978:~$ du -sh *
4.0K	bin
0	civicrm.opensourceecology.org
4.0K	civimail-errors.txt
2.0M	CiviMail.ignored-2011
20K	civimail.out
20K	cron
2.5M	d7-civicrm.dump
828K	d7-drupal.dump
788K	mbkp
2.2M	oftcivi.dump
8.0M	oftdrupal.dump
4.0K	passwd.cdb
0	public_html
4.0K	pw.txt
28K	users
3.4M	www_logs
osecivi@dedi978:~$ du -sh
44M	.
osecivi@dedi978:~$ 
  1. oseforum has 1.1G
oseforum@dedi978:~$ date
Fri Jul  6 02:41:04 CEST 2018
oseforum@dedi978:~$ pwd
/usr/home/oseforum
oseforum@dedi978:~$ whoami
oseforum
oseforum@dedi978:~$ du -sh *
8.0K	bin
16K	cron
0	forum.opensourceecology.org
788K	mbkp
7.5M	oftforum.dump
4.0K	passwd.cdb
0	public_html
102M	tmp
14M	users
11M	vanilla-2.0.18.1
756M	www_logs
oseforum@dedi978:~$ du -sh
1.1G	.
oseforum@dedi978:~$ 
  1. microft has 1.8G
microft@dedi978:~$ date
Fri Jul  6 02:42:00 CEST 2018
microft@dedi978:~$ pwd
/usr/home/microft
microft@dedi978:~$ whoami
microft
microft@dedi978:~$ du -sh *
8.8M	db-backup
3.6M	drupal.sql
1.6M	drush
44M	drush-backups
1.7M	git_repos
376M	mbkp-wiki-db
18M	mediawiki-1.20.2.tar.gz
4.0K	passwd.cdb
0	public_html
28K	users
1.3G	www_logs
microft@dedi978:~$ du -sh
1.8G	.
microft@dedi978:~$ 
  1. those numbers above are files only. They don't include mailboxes or databases. I don't really care about mailboxes (they're probably unused), but I do want to back up databases.
  2. osemain has 5 databases:
    1. openswh 7.51M
    2. ose_fef 3.65M
    3. ose_website 32M
    4. osesurv 697K
    5. osewiki 2.48G
    6. there don't appear to be any DBs for the 'addon' domains under this domain (addontest, holla, irc, opensourcewarehouse, sandbox, survey)
  3. oseblog has 1 db
    1. oseblog 1.23G
  4. osecivi has 2 dbs
    1. osecivi 31.3M
    2. osedrupal 8.05M
  5. oseforum has 1 db
    1. oseforum 182M
  6. microft has 3 dbs
    1. microft_db2 24.3M
    2. microft_drupal1 33.4M
    3. microft_wiki 19.5G
  7. so the total size of file data to back up is 5.7+2.7+0.04+1.1+1.8 = 11.34G
  8. and the total size of db data to back up is 0.007+0.003+0.03+0.001+2.48+1.23+0.03+0.08+0.1+0.02+0.02+0.03+19.5 = 23.53G
  9. therefore, the total size (files + DBs) to push to glacier, so we can feel safe permanently shutting down hetzner1, is 11.34 + 23.53 = 34.87G (a rough sketch of the eventual glacier upload is below)
    1. note that this size will likely be much smaller after compression.
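  10. looking ahead, a very rough sketch of what the eventual glacier upload might look like (the vault name is a placeholder; also note that a single upload-archive call maxes out around 4G, so the bigger archives will need the multipart upload calls or a wrapper tool)
# placeholder vault name; create the vault once, then upload one encrypted archive
aws glacier create-vault --account-id - --vault-name hetzner1-archive
aws glacier upload-archive --account-id - --vault-name hetzner1-archive --archive-description 'final_backup_before_hetzner1_deprecation oseforum home' --body final_backup_before_hetzner1_deprecation_oseforum_20180706-230007_home.tar.bz2.gpg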