Maltfield Log/2020 Q1


My work log from 2020 Quarter 1. I intentionally made this verbose to make future admins' work easier when troubleshooting. The more keywords, error messages, etc. that are listed in this log, the more helpful it will be for the future OSE Sysadmin.

See Also

  1. Maltfield_Log
  2. User:Maltfield
  3. Special:Contributions/Maltfield

Tue Mar 31, 2020

  1. the unattended-upgrades logs suggest that anacron actually kicked it off, but that it failed to acquire some lock?
==> /var/log/unattended-upgrades/unattended-upgrades-dpkg.log <==
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 48062 files and directories currently installed.)
Preparing to unpack .../sudo_1.8.27-1+deb10u2_amd64.deb ...
Unpacking sudo (1.8.27-1+deb10u2) over (1.8.27-1+deb10u1) ...
Setting up sudo (1.8.27-1+deb10u2) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Processing triggers for systemd (241-7~deb10u3) ...
Log ended: 2020-03-25  08:06:22


==> /var/log/unattended-upgrades/unattended-upgrades.log <==
2020-03-26 05:56:45,291 INFO Initial whitelist:
2020-03-26 05:56:45,291 INFO Starting unattended upgrades script
2020-03-26 05:56:45,293 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-26 05:56:45,295 ERROR Lock could not be acquired (another package manager running?)
2020-03-26 05:56:46,098 INFO Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
2020-03-26 05:56:46,108 INFO Initial blacklist :
2020-03-26 05:56:46,109 INFO Initial whitelist:
2020-03-26 05:56:46,109 INFO Starting unattended upgrades script
2020-03-26 05:56:46,110 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-26 05:56:46,111 ERROR Lock could not be acquired (another package manager running?)
  1. indeed it looks like sudo didn't get updated
root@osestaging1-discourse-ose:/tmp/iptables.bak# dpkg -l | grep -i sudo
ii  sudo                            1.8.27-1+deb10u1             amd64        Provide limited super user privileges to specific users
root@osestaging1-discourse-ose:/tmp/iptables.bak# 
  1. it looks like it tried twice and at least downloaded the package, but stopped short of installing it. I think I'll wait a bit to see if it actually goes through; maybe this was just some race condition that blocked the install by coincidence (a quick lock-check is sketched below)
  2. regarding the "could not determine current runlevel" error, it looks like a commonly reported error for docker users attempting to run Linux containers on Windows machines https://github.com/microsoft/WSL/issues/1761
    1. this may be because systemd is not installed? No, that can't be. I've manually kicked off unattended-upgrades myself on the CLI. Maybe it's an issue with some missing env var? Anyway, I'll just leave it for some days and see...
    2. something something ebtables https://answers.microsoft.com/en-us/windows/forum/windows_10-windows_install/errors-in-ubuntu-1804-on-windows-10/fe349f3d-3d58-4d90-9f8f-c14d7c12af8b
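To narrow down the lock error, a quick check like the following (my addition, not from the original session; it assumes the psmisc package providing fuser is available in the container) would show whether another package manager is holding the locks:

# see whether another apt/dpkg process is holding the locks unattended-upgrades needs
fuser -v /var/lib/dpkg/lock /var/lib/dpkg/lock-frontend /var/cache/apt/archives/lock
# and list any apt/dpkg/unattended-upgrades processes still running
ps -ef | grep -E 'apt|dpkg|unattended-upgrade' | grep -v grep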
  3. ...
  4. back to varnish
  5. I went to switch the reverse proxy from pointing to the "inner nginx" socket file to point instead to varnish on localhost. I noticed a few differences
    1. here's the block for the current reverse proxy pointing to the "inner nginx" socket file
##################
# SEND TO DOCKER #
##################

location / {
#proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
#proxy_pass http://127.0.0.1:8020/;
#proxy_set_header Host $http_host;
#proxy_http_version 1.1; 
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Forwarded-Proto https;
#proxy_set_header X-Real-IP $remote_addr;
}
    1. And here's the block that we're using to point to varnish for, for example, our wiki
###################
# SEND TO VARNISH #
###################

   location / {
	  proxy_pass http://127.0.0.1:6081;
	  proxy_set_header X-Real-IP $remote_addr;
	  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	  proxy_set_header X-Forwarded-Proto https;
	  proxy_set_header X-Forwarded-Port 443;
	  proxy_set_header Host $host;
   }
    1. note a few differences, namely
      1. the Discourse block includes setting the http_version to 1.1, but that's absent from our usual varnish config
      2. the Discourse block sets the 'Host:' header to '$http_host', but our usual varnish config sets the 'Host:' header to '$host'
      3. the Discourse block doesn't set the 'X-Forwarded-Port', but our usual varnish config does set the 'X-Forwarded-Port' header to '443'
  1. I ended up adding the proxy_http_version directive to our usual config and left everything else the same
[root@osestaging1 conf.d]# tail -n30 discourse.opensourceecology.org.conf

###################
# SEND TO VARNISH #
###################

   location / {
	  proxy_pass http://127.0.0.1:6081;
	  proxy_set_header X-Real-IP $remote_addr;
	  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	  proxy_set_header X-Forwarded-Proto https;
	  proxy_set_header X-Forwarded-Port 443;
	  proxy_set_header Host $host;
	  proxy_http_version 1.1;
   }

##################
# SEND TO DOCKER #
##################

#location / {
#proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
#proxy_pass http://127.0.0.1:8020/;
#proxy_set_header Host $http_host;
#proxy_http_version 1.1;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Forwarded-Proto https;
#proxy_set_header X-Real-IP $remote_addr;
#}

}
[root@osestaging1 conf.d]# nginx -t
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.openbuildinginstitute.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.openbuildinginstitute.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.openbuildinginstitute.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.openbuildinginstitute.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] conflicting server name "_" on 10.241.189.11:443, ignored
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@osestaging1 conf.d]# 
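Note that nginx -t above only validates the config; to actually apply the new proxy_pass target, the host's nginx still needs a reload (my note; assuming nginx runs under systemd on this host):

# apply the validated config without dropping connections
systemctl reload nginx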
  1. And I updated the varnish config to have the backend for 'discourse.opensourceecology.org' be 127.0.0.1:8020
[root@osestaging1 conf.d]# head -n25 /etc/varnish/sites-enabled/discourse.opensourceecology.org 
################################################################################
# File:    discourse.opensourceecology.org.vcl
# Version: 0.1
# Purpose: Confg file for ose's discourse site
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2019-03-23
# Updated: 2019-03-23
################################################################################

vcl 4.0;

##########################
# VHOST-SPECIFIC BACKEND #
##########################

backend discourse_opensourceecology_org {
		.host = "127.0.0.1";
		.port = "8020";
}

sub vcl_recv {

		if ( req.http.host == "discourse.opensourceecology.org" ){

				set req.backend_hint = discourse_opensourceecology_org;
[root@osestaging1 conf.d]# 
  1. I updated the varnish main config file to enable the discourse-specific varnish config and restarted varnish to apply the changes
[root@osestaging1 conf.d]# cat /etc/varnish/all-vhosts.vcl 
################################################################################
# File:    all-hosts.vcl
# Version: 1.5
# Purpose: meta config file that simply imports the site-specific vcl files
#          stored in the 'sites-enabled' directory Please see this for more info
#            * https://www.getpagespeed.com/server-setup/varnish/varnish-virtual-hosts
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2017-11-12
# Updated: 2020-03-31
################################################################################

include "sites-enabled/staging.openbuildinginstitute.org";
include "sites-enabled/staging.opensourceecology.org";
include "sites-enabled/awstats.openbuildinginstitute.org";
include "sites-enabled/awstats.opensourceecology.org";
include "sites-enabled/munin.opensourceecology.org";

include "sites-enabled/www.openbuildinginstitute.org";
include "sites-enabled/www.opensourceecology.org";
include "sites-enabled/seedhome.openbuildinginstitute.org";
include "sites-enabled/fef.opensourceecology.org";
include "sites-enabled/oswh.opensourceecology.org";
include "sites-enabled/forum.opensourceecology.org";
include "sites-enabled/wiki.opensourceecology.org";
include "sites-enabled/phplist.opensourceecology.org";
include "sites-enabled/microfactory.opensourceecology.org";
include "sites-enabled/store.opensourceecology.org";
include "sites-enabled/discourse.opensourceecology.org";
[root@osestaging1 conf.d]# varnishd -Cf /etc/varnish/default.vcl > /dev/null && systemctl restart varnish
[root@osestaging1 conf.d]# 
  1. And it's working!
user@ose:~$ curl -kI https://discourse.opensourceecology.org/
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 31 Mar 2020 08:54:23 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Vary: Accept-Encoding
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Permitted-Cross-Domain-Policies: none
Referrer-Policy: strict-origin-when-cross-origin
X-Discourse-Route: list/latest
Cache-Control: no-cache, no-store
Content-Security-Policy: base-uri 'none'; object-src 'none'; script-src 'report-sample' https://discourse.opensourceecology.org/logs/ https://discourse.opensourceecology.org/sidekiq/ https://discourse.opensourceecology.org/mini-profiler-resources/ https://discourse.opensourceecology.org/assets/ https://discourse.opensourceecology.org/brotli_asset/ https://discourse.opensourceecology.org/extra-locales/ https://discourse.opensourceecology.org/highlight-js/ https://discourse.opensourceecology.org/javascripts/ https://discourse.opensourceecology.org/plugins/ https://discourse.opensourceecology.org/theme-javascripts/ https://discourse.opensourceecology.org/svg-sprite/; worker-src 'self' blob:
X-Discourse-Cached: skip
X-Request-Id: 10fbc90b-2623-4ac4-b509-1ec1c76675f0
X-Runtime: 0.096138
X-Discourse-TrackView: 1
X-Varnish: 2
Age: 0
Via: 1.1 varnish-v4
Strict-Transport-Security: max-age=15552001
Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report"

user@ose:~$ 
  1. now I'll test to make sure that the cache is actually working for something easily cacheable: an image
    1. I run this on my laptop
user@ose:~$ curl -kI https://discourse.opensourceecology.org/images/discourse-logo-sketch.png
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 31 Mar 2020 09:05:36 GMT
Content-Type: image/png
Content-Length: 169105
Connection: keep-alive
Last-Modified: Mon, 30 Mar 2020 13:03:31 GMT
X-Varnish: 56 131126
Age: 15
Via: 1.1 varnish-v4
Strict-Transport-Security: max-age=15552001
Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report"

user@ose:~$ 
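Note that the two transaction IDs in 'X-Varnish: 56 131126' plus the non-zero 'Age: 15' above already indicate a cache hit from the client side; a quick spot-check (my addition) that filters just those headers:

# the varnish-related response headers are enough to distinguish a hit from a miss
curl -skI https://discourse.opensourceecology.org/images/discourse-logo-sketch.png | grep -Ei '^(x-varnish|age|via):'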
    1. and this on the server shows a hit & deliver (returned from cache)
[root@osestaging1 conf.d]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 10.241.189.50'"
* << Request  >> 56
-   Begin          req 55 rxreq
-   Timestamp      Start: 1585645536.846397 0.000000 0.000000
-   Timestamp      Req: 1585645536.846397 0.000000 0.000000
-   ReqStart       127.0.0.1 48750
-   ReqMethod      HEAD
-   ReqURL         /images/discourse-logo-sketch.png
-   ReqProtocol    HTTP/1.1
-   ReqHeader      X-Real-IP: 10.241.189.50
-   ReqHeader      X-Forwarded-For: 10.241.189.50
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: discourse.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      User-Agent: curl/7.52.1
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 10.241.189.50
-   ReqHeader      X-Forwarded-For: 10.241.189.50, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 10.241.189.50, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   Hit            131126
-   VCL_call       HIT
-   VCL_return     deliver
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Server: nginx
-   RespHeader     Date: Tue, 31 Mar 2020 09:05:21 GMT
-   RespHeader     Content-Type: image/png
-   RespHeader     Content-Length: 169105
-   RespHeader     Last-Modified: Mon, 30 Mar 2020 13:03:31 GMT
-   RespHeader     X-Varnish: 56 131126
-   RespHeader     Age: 15
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1585645536.846511 0.000114 0.000114
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   Timestamp      Resp: 1585645536.846541 0.000145 0.000031
-   Debug          "XXX REF 2"
-   ReqAcct        254 0 254 237 0 237
-   End

  1. now I clear the varnish cache and try it again, confirming that this time it came from the backend. note the differences between a hit and a miss
[root@osestaging1 conf.d]# varnishadm 'ban req.url ~ "."'

[root@osestaging1 conf.d]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 10.241.189.50'"
* << Request  >> 58
-   Begin          req 57 rxreq
-   Timestamp      Start: 1585645664.703570 0.000000 0.000000
-   Timestamp      Req: 1585645664.703570 0.000000 0.000000
-   ReqStart       127.0.0.1 48760
-   ReqMethod      HEAD
-   ReqURL         /images/discourse-logo-sketch.png
-   ReqProtocol    HTTP/1.1
-   ReqHeader      X-Real-IP: 10.241.189.50
-   ReqHeader      X-Forwarded-For: 10.241.189.50
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: discourse.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      User-Agent: curl/7.52.1
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 10.241.189.50
-   ReqHeader      X-Forwarded-For: 10.241.189.50, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 10.241.189.50, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   ExpBan         131126 banned lookup
-   Debug          "XXXX MISS"
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 59 fetch
-   Timestamp      Fetch: 1585645664.721922 0.018352 0.018352
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Server: nginx
-   RespHeader     Date: Tue, 31 Mar 2020 09:07:44 GMT
-   RespHeader     Content-Type: image/png
-   RespHeader     Content-Length: 169105
-   RespHeader     Last-Modified: Mon, 30 Mar 2020 13:03:31 GMT
-   RespHeader     X-Varnish: 58
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1585645664.721999 0.018429 0.000077
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   Timestamp      Resp: 1585645664.722054 0.018484 0.000055
-   Debug          "XXX REF 2"
-   ReqAcct        254 0 254 229 0 229
-   End

  1. I started testing WUI requests for '/' and watching how varnish handles them.
  2. I noticed that the 'Accept-Encoding' header is being overwritten; notably, the 'br' option that Discourse likes is being removed. I think this was some MediaWiki-specific setting, so I removed it from the varnish config
  3. strangely, it *still* got unset during the RECV VCL_return step after I removed it from the config & restarted varnish
* << Request  >> 262173
-   Begin          req 262172 rxreq
-   Timestamp      Start: 1585646863.774483 0.000000 0.000000
-   Timestamp      Req: 1585646863.774483 0.000000 0.000000
-   ReqStart       127.0.0.1 49292
-   ReqMethod      GET
-   ReqURL         /
-   ReqProtocol    HTTP/1.1
-   ReqHeader      X-Real-IP: 10.241.189.50
-   ReqHeader      X-Forwarded-For: 10.241.189.50
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: discourse.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      Cache-Control: max-age=0
-   ReqHeader      Upgrade-Insecure-Requests: 1
-   ReqHeader      User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36
-   ReqHeader      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
-   ReqHeader      Accept-Encoding: gzip, deflate, br
-   ReqHeader      Accept-Language: en-US,en;q=0.9
-   ReqUnset       X-Forwarded-For: 10.241.189.50
-   ReqHeader      X-Forwarded-For: 10.241.189.50, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 10.241.189.50, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   ReqUnset       Accept-Encoding: gzip, deflate, br
-   ReqHeader      Accept-Encoding: gzip
-   VCL_call       HASH
-   VCL_return     lookup
  1. but the only config file that should be doing this is the wiki site
[root@osestaging1 sites-enabled]# grep -irl 'encoding' /etc/varnish/
/etc/varnish/sites-enabled/wiki.opensourceecology.org
[root@osestaging1 sites-enabled]# 
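My guess (not verified here) is that this rewrite is done by varnishd itself rather than by any VCL: when the http_gzip_support parameter is on (the default), varnish normalizes the client's Accept-Encoding header to 'gzip' before the cache lookup, which would explain why it persists after removing the wiki-era VCL. A quick way to check the parameter:

# if this is 'on', varnishd normalizes Accept-Encoding on its own, outside of VCL
varnishadm param.show http_gzip_support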
  1. I logged into Discourse and checked the browser's cookies. After login, I found my browser sending cookie headers with the following names:
    1. _forum_session
    2. _t
  2. I searched the Discourse meta site for these, and I came across the following thread listing all the cookies used by Discourse https://meta.discourse.org/t/list-of-cookies-used-by-discourse/83690
    1. I guess I'll have varnish not cache if any of the following cookies are present, and strip all other cookies before caching (a rough VCL sketch follows this list):
      1. _forum_session
      2. _t
      3. email
      4. _bypass_cache
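A rough VCL 4.0 sketch of that policy (my own, untested here) for the discourse vcl_recv block might look like this:

		if ( req.http.host == "discourse.opensourceecology.org" ){
				# bypass the cache entirely when a Discourse session/auth cookie is present
				if ( req.http.Cookie ~ "(^|;\s*)(_forum_session|_t|email|_bypass_cache)=" ){
						return (pass);
				}
				# otherwise strip cookies so anonymous requests can be cached
				unset req.http.Cookie;
		}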
  3. I started experiencing some issues with 403s. Looks like the "Log Out" button in Discourse uses the DELETE HTTP method https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/DELETE
10.241.189.50 - - [31/Mar/2020:11:37:52 +0000] "DELETE /session/maltfield0 HTTP/1.1" 444 0 "https://discourse.opensourceecology.org/u/maltfield0/preferences/account" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36" "-"
  1. yeah, we've disabled TRACE & DELETE for security purposes; guess I'm going to have to override this for the discourse site :\
[root@osestaging1 sites-enabled]# cat /etc/nginx/conf.d/secure.include 
################################################################################
# File:    secure.include
# Version: 0.1
# Purpose: Basic security settings that couldn't be put in the main nginx.conf.
#          This should be included in the server{} blocks nginx vhosts.
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2017-11-23
# Updated: 2017-11-23
################################################################################

   # whitelist requests to disable TRACE & DELETE
   if ($request_method !~ ^(GET|HEAD|POST)$ ) {
	  # note: 444 is a meta code; it doesn't return anything, actually
	  #       it just logs, drops, & closes the connection (useful
	  #       against malware)
	  return 444;
   }

   ## block some bot's useragents (may need to remove some, if impacts SEO)
   if ($blockedagent) {
	  return 403;
   }
[root@osestaging1 sites-enabled]# 
  1. I commented out the line that includes the global 'secure.include' file and added back a Discourse-specific modified version of its contents
[root@osestaging1 sites-enabled]# grep -EA30  '^server {' /etc/nginx/conf.d/discourse.opensourceecology.org.conf 
server {

   access_log /var/log/nginx/discourse.opensourceecology.org/access.log main;
   error_log /var/log/nginx/discourse.opensourceecology.org/error.log;

   # we can't use the global 'secure.include' file for Discourse, which
   # requires use of the DELETE http method, for example
   #include conf.d/secure.include;

   # whitelist requests to disable TRACE
   if ($request_method !~ ^(GET|HEAD|POST|DELETE)$ ) {
	  # note: 444 is a meta code; it doesn't return anything, actually
	  #       it just logs, drops, & closes the connection (useful
	  #       against malware)
	  return 444;
   }

   ## block some bot's useragents (may need to remove some, if impacts SEO)
   if ($blockedagent) {
	  return 403;
   }

   include conf.d/ssl.opensourceecology.org.include;
   #include conf.d/ssl.openbuildinginstitute.org.include;

   listen 10.241.189.11:443;
   #listen [2a01:4f8:172:209e::2]:443;

   server_name discourse.opensourceecology.org;

#############
[root@osestaging1 sites-enabled]# 
  1. meanwhile, there are lots of mod_security errors causing 403s on the "inner nginx" setup, for example:
==> error.log <==
2020/03/31 12:39:30 [error] 750#750: *406 [client 172.17.0.1] ModSecurity: Access denied with code 403 (phase 2). Matched "Operator `Ge' with parameter `5' against variable `TX:ANOMALY_SCORE' (Value: `5' ) [file "/usr/share/modsecurity-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "80"] [id "949110"] [rev ""] [msg "Inbound Anomaly Score Exceeded (Total Score: 5)"] [data ""] [severity "2"] [ver ""] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "172.17.0.2"] [uri "/session/maltfield0"] [unique_id "158565837038.357221"] [ref ""], client: 172.17.0.1, server: _, request: "DELETE /session/maltfield0 HTTP/1.1", host: "discourse.opensourceecology.org", referrer: "https://discourse.opensourceecology.org/u/maltfield0/preferences/account"

==> modsec_audit.log <==
---IwwgdQ7Y---A--
[31/Mar/2020:12:39:30 +0000] 158565837038.357221 172.17.0.1 39726 172.17.0.2 80
---IwwgdQ7Y---B--
DELETE /session/maltfield0 HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36
X-Forwarded-Proto: https
X-CSRF-Token: IQUR2Y8mUZRPLbQnWxQxFnIClj21sZEHUlLRZ2BherTUIlJL7RYNSwk4xGLiRnifCtqFmSj9QwXIYAYCpNQF7Q==
Origin: https://discourse.opensourceecology.org
Discourse-Present: true
Host: discourse.opensourceecology.org
X-Forwarded-Port: 443
Pragma: no-cache
Referer: https://discourse.opensourceecology.org/u/maltfield0/preferences/account
Cache-Control: no-cache
Accept: */*
X-Real-IP: 10.241.189.50
Discourse-Logged-In: true
X-Requested-With: XMLHttpRequest
Accept-Encoding: gzip, deflate, br
Cookie: _forum_session=QlFmdWVVZEhGbXBRSHJ1QTRkMDdXclNVeXNTZ0NLSU56YStqbm9TOTFDSWhZYWVrYVB2dVhnd0FtM0tJK2dQbHRUWmtKa1ZKaWovZWROS01yTnhoWmhPR2poOXBDYjk2ZFY1b3lxVVR6U1RwU3NSNllFMDZsWExVcktTNWF3T2t3SXRoZlQzeFBVUlRmWDlIeDl0Q1FjbnpMWWZCRGQ4OTlMTTJkaVJiWlBualVjNDUraFkrNVpOVXdTS0I0V29qSVFiS3dwZ1BLZUtDeFIxM0dvcmRCenZ2WUVQK1BXcG5JSURCa1B2c0JRYVE1VW80Wm1IN0VmSTBuZFdZWDExYXBadHVYVTlPRlViZ3JONUZzS1l0NWc9PS0tbnQ1S0hpeUt0cENCRXNSYTNSVGdjQT09--f1af3d501e91f8c4d03af7649e3b4c894fd9d251; _t=f648339fb90e355cbe15b9f4055afef4
Accept-Language: en-US,en;q=0.9
X-Forwarded-For: 127.0.0.1
X-Varnish: 393612

---IwwgdQ7Y---D--

---IwwgdQ7Y---E--
<html>\x0d\x0a<head><title>403 Forbidden</title></head>\x0d\x0a<body>\x0d\x0a<center><h1>403 Forbidden</h1></center>\x0d\x0a<hr><center>nginx</center>\x0d\x0a</body>\x0d\x0a</html>\x0d\x0a<!-- a padding to disable MSIE and Chrome friendly error page -->\x0d\x0a<!-- a padding to disable MSIE and Chrome friendly error page -->\x0d\x0a<!-- a padding to disable MSIE and Chrome friendly error page -->\x0d\x0a<!-- a padding to disable MSIE and Chrome friendly error page -->\x0d\x0a<!-- a padding to disable MSIE and Chrome friendly error page -->\x0d\x0a<!-- a padding to disable MSIE and Chrome friendly error page -->\x0d\x0a

---IwwgdQ7Y---F--
HTTP/1.1 403
Server: nginx
Date: Tue, 31 Mar 2020 12:39:30 GMT
Content-Length: 548
Content-Type: text/html
Connection: keep-alive

---IwwgdQ7Y---H--
ModSecurity: Warning. Matched "Operator `Within' with parameter `GET HEAD POST OPTIONS' against variable `REQUEST_METHOD' (Value: `DELETE' ) [file "/usr/share/modsecurity-crs/rules/REQUEST-911-METHOD-ENFORCEMENT.conf"] [line "27"] [id "911100"] [rev ""] [msg "Method is not allowed by policy"] [data "DELETE"] [severity "2"] [ver "OWASP_CRS/3.1.0"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [tag "OWASP_CRS/POLICY/METHOD_NOT_ALLOWED"] [tag "WASCTC/WASC-15"] [tag "OWASP_TOP_10/A6"] [tag "OWASP_AppSensor/RE1"] [tag "PCI/12.1"] [hostname "172.17.0.2"] [uri "/session/maltfield0"] [unique_id "158565837038.357221"] [ref "v0,6"]
ModSecurity: Access denied with code 403 (phase 2). Matched "Operator `Ge' with parameter `5' against variable `TX:ANOMALY_SCORE' (Value: `5' ) [file "/usr/share/modsecurity-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "80"] [id "949110"] [rev ""] [msg "Inbound Anomaly Score Exceeded (Total Score: 5)"] [data ""] [severity "2"] [ver ""] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "172.17.0.2"] [uri "/session/maltfield0"] [unique_id "158565837038.357221"] [ref ""]
ModSecurity: Warning. Matched "Operator `Ge' with parameter `5' against variable `TX:INBOUND_ANOMALY_SCORE' (Value: `5' ) [file "/usr/share/modsecurity-crs/rules/RESPONSE-980-CORRELATION.conf"] [line "76"] [id "980130"] [rev ""] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 5 - SQLI=0,XSS=0,RFI=0,LFI=0,RCE=0,PHPI=0,HTTP=0,SESS=0): Method is not allowed by policy; individual paranoia level scores: 5, 0, 0, 0"] [data ""] [severity "0"] [ver ""] [maturity "0"] [accuracy "0"] [tag "event-correlation"] [hostname "172.17.0.2"] [uri "/session/maltfield0"] [unique_id "158565837038.357221"] [ref ""]

---IwwgdQ7Y---I--

---IwwgdQ7Y---J--

---IwwgdQ7Y---Z--
  1. I added my first exception to the "inner nginx" modsecurity config file and restarted nginx
root@osestaging1-discourse-ose:/etc/nginx/conf.d# cat /etc/nginx/conf.d/modsecurity.include 
################################################################################
# File:    modsecurity.include
# Version: 0.1
# Purpose: Defines mod_security rules for the discourse vhost
#          This should be included in the server{} blocks nginx vhosts.
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2019-11-12
# Updated: 2019-11-12
################################################################################
Include "/etc/modsecurity/modsecurity.conf"

# OWASP Core Rule Set, installed from the 'modsecurity-crs' package in debian
Include /etc/modsecurity/crs/crs-setup.conf
Include /usr/share/modsecurity-crs/rules/*.conf

SecRuleRemoveById 949110
root@osestaging1-discourse-ose:/etc/nginx/conf.d# sv restart nginx
ok: run: nginx: (pid 26682) 0s
root@osestaging1-discourse-ose:/etc/nginx/conf.d# 
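Note that SecRuleRemoveById 949110 disables the CRS anomaly-blocking rule for everything, not just DELETE; a narrower alternative (my suggestion, untested here) would be to add DELETE to the CRS allowed-methods list in /etc/modsecurity/crs/crs-setup.conf instead, roughly:

# permit DELETE via the CRS policy setting rather than removing rule 949110
SecAction "id:900200,phase:1,nolog,pass,t:none,setvar:'tx.allowed_methods=GET HEAD POST OPTIONS DELETE'"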
  1. Aaand now the logout works!
  2. ok, next week I'm just going to have to keep iterating on these modsecurity & varnish issues until the config is sane, then do an e2e install test

Mon Mar 30, 2020

  1. looks like unattended-upgrades *still* hasn't updated the sudo package
root@osestaging1-discourse-ose:/var/www/discourse# dpkg -l | grep -i sudo
ii  sudo                            1.8.27-1+deb10u1             amd64        Provide limited super user privileges to specific users
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. and the logs are spammed with this
root@osestaging1-discourse-ose:/var/log# tail -f syslog unattended-upgrades/*
==> syslog <==
Mar 30 06:35:01 osestaging1-discourse-ose CRON[29030]: Cannot make/remove an entry for the specified session
Mar 30 06:45:01 osestaging1-discourse-ose CRON[30254]: Cannot make/remove an entry for the specified session
Mar 30 06:55:01 osestaging1-discourse-ose CRON[31477]: Cannot make/remove an entry for the specified session
Mar 30 07:05:01 osestaging1-discourse-ose CRON[32701]: Cannot make/remove an entry for the specified session
Mar 30 07:15:01 osestaging1-discourse-ose CRON[1523]: Cannot make/remove an entry for the specified session
Mar 30 07:17:01 osestaging1-discourse-ose CRON[1768]: Cannot make/remove an entry for the specified session
Mar 30 07:25:01 osestaging1-discourse-ose CRON[2747]: Cannot make/remove an entry for the specified session
Mar 30 07:30:01 osestaging1-discourse-ose CRON[3362]: Cannot make/remove an entry for the specified session
Mar 30 07:35:01 osestaging1-discourse-ose CRON[3974]: Cannot make/remove an entry for the specified session
Mar 30 07:45:01 osestaging1-discourse-ose CRON[5196]: Cannot make/remove an entry for the specified session
  1. this time I'm going to test commenting out the pam_loginuid.so line in /etc/pam.d/cron, per the associated bug with this error message that I discovered last week https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=726661
root@osestaging1-discourse-ose:/etc/cron.d# vim /etc/pam.d/cron 
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/pam.d/cron 
# The PAM configuration file for the cron daemon

@include common-auth

# Sets the loginuid process attribute
#session    required     pam_loginuid.so

# Read environment variables from pam_env's default files, /etc/environment
# and /etc/security/pam_env.conf.
session       required   pam_env.so

# In addition, read system locale information
session       required   pam_env.so envfile=/etc/default/locale

@include common-account
@include common-session-noninteractive 

# Sets up user limits, please define limits for cron tasks
# through /etc/security/limits.conf
session    required   pam_limits.so

root@osestaging1-discourse-ose:/etc/cron.d# ps -ef | grep -i cron
root       706   532  0 Mar26 ?        00:00:00 runsv cron
root      7423   706  0 08:03 ?        00:00:00 cron -f
root     11841   569  0 08:39 pts/1    00:00:00 grep -i cron
root@osestaging1-discourse-ose:/etc/cron.d# sv restart cron
ok: run: cron: (pid 11848) 1s
root@osestaging1-discourse-ose:/etc/cron.d# ps -ef | grep -i cron
root       706   532  0 Mar26 ?        00:00:00 runsv cron
root     11848   706  0 08:39 ?        00:00:00 cron -f
root     11853   569  0 08:39 pts/1    00:00:00 grep -i cron
root@osestaging1-discourse-ose:/etc/cron.d# 
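A simple follow-up check (my addition) once the next scheduled job fires, to confirm the PAM error stopped:

# the 'Cannot make/remove an entry for the specified session' lines should stop appearing
tail -n 20 /var/log/syslog | grep -i cron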
  1. if that doesn't work, then I'm giving up on crond and going to try putting this in the anacron daily dir
  2. ...
  3. meanwhile, let's pickup varnish again
  4. first, the site gives me a 502 bad gateway message
user@ose:~$ curl -kI https://discourse.opensourceecology.org/
HTTP/1.1 502 Bad Gateway
Server: nginx
Date: Mon, 30 Mar 2020 09:03:23 GMT
Content-Type: text/html
Content-Length: 150
Connection: keep-alive

user@ose:~$ 
  1. looks like the cause is an issue with the nginx config file in the container
root@osestaging1-discourse-ose:/etc/cron.d# nginx -t
nginx: [emerg] unknown directive "modsecurity" in /etc/nginx/conf.d/discourse.conf:38
nginx: configuration file /etc/nginx/nginx.conf test failed
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. additionally, that's causing the docker host server's nginx to spit out this error
[root@osestaging1 nginx]# tail -f *.log discourse.opensourceecology.org/*.log

2020/03/30 09:10:21 [crit] 27294#0: *57 connect() to unix:/var/discourse/shared/standalone/nginx.http.sock failed (2: No such file or directory) while connecting to upstream, client: 10.241.189.50, server: discourse.opensourceecology.org, request: "GET / HTTP/1.1", upstream: "http://unix:/var/discourse/shared/standalone/nginx.http.sock:/", host: "discourse.opensourceecology.org"

==> discourse.opensourceecology.org/access.log <==
10.241.189.50 - - [30/Mar/2020:09:10:21 +0000] "GET / HTTP/1.1" 502 552 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36" "-"

==> discourse.opensourceecology.org/error.log <==
2020/03/30 09:10:21 [crit] 27294#0: *57 connect() to unix:/var/discourse/shared/standalone/nginx.http.sock failed (2: No such file or directory) while connecting to upstream, client: 10.241.189.50, server: discourse.opensourceecology.org, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:/var/discourse/shared/standalone/nginx.http.sock:/favicon.ico", host: "discourse.opensourceecology.org", referrer: "https://discourse.opensourceecology.org/"

==> discourse.opensourceecology.org/access.log <==
10.241.189.50 - - [30/Mar/2020:09:10:21 +0000] "GET /favicon.ico HTTP/1.1" 502 552 "https://discourse.opensourceecology.org/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36" "-"

^C
[root@osestaging1 nginx]# 
  1. ...which makes sense, because the socket file doesn't actually exist in the container since nginx isn't even running
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /shared/
total 48K
drwxr-xr-x. 11 root      root     4.0K Mar 26 06:03 .
drwxr-xr-x.  1 root      root     4.0K Mar 25 09:44 ..
drwxr-xr-x.  3 discourse www-data 4.0K Mar 16 16:52 backups
drwxr-xr-x.  4 root      root     4.0K Mar 16 15:50 log
drwxr-xr-x.  2 postgres  postgres 4.0K Mar 16 15:24 postgres_backup
drwx------. 19 postgres  postgres 4.0K Mar 26 06:03 postgres_data
drwxrwxr-x.  3 postgres  postgres 4.0K Mar 26 06:03 postgres_run
drwxr-xr-x.  2 redis     redis    4.0K Mar 30 09:08 redis_data
drwxr-xr-x.  4 root      root     4.0K Mar 16 16:23 state
drwxr-xr-x.  4 discourse www-data 4.0K Mar 26 06:03 tmp
drwxr-xr-x.  3 discourse www-data 4.0K Mar 16 16:16 uploads
root@osestaging1-discourse-ose:/etc/cron.d# ps -ef | grep -i nginx
root       539   532  0 Mar26 ?        00:10:53 runsv nginx
root     15833   569  0 09:11 pts/1    00:00:00 grep -i nginx
root@osestaging1-discourse-ose:/etc/cron.d# sv status nginx
down: nginx: 0s, normally up, want up
root@osestaging1-discourse-ose:/etc/cron.d# sv start nginx
timeout: down: nginx: 0s, normally up, want up
root@osestaging1-discourse-ose:/etc/cron.d# ps -ef | grep -i nginx
root       539   532  0 Mar26 ?        00:10:53 runsv nginx
root     15882   569  0 09:11 pts/1    00:00:00 grep -i nginx
root@osestaging1-discourse-ose:/etc/cron.d# sv status nginx
down: nginx: 0s, normally up, want up
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. it does in fact look like the nginx installed in the Discourse container doesn't have ModSecurity, which should have been built into it at compile time when we did the `./launcher rebuild`. what happened?
root@osestaging1-discourse-ose:/etc/cron.d# nginx -V
nginx version: nginx/1.14.2
built with OpenSSL 1.1.1c  28 May 2019 (running with OpenSSL 1.1.1d  10 Sep 2019)
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-tBUzFN/nginx-1.14.2=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-auth-pam --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-dav-ext --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-echo --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-upstream-fair --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-subs-filter
root@osestaging1-discourse-ose:/etc/cron.d# 
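A one-liner (my addition) that makes the check above less error-prone, since the configure arguments are easy to misread:

# the custom build should list the ModSecurity-nginx add-on among its configure arguments
nginx -V 2>&1 | grep -o 'ModSecurity-nginx' || echo 'ModSecurity module NOT compiled in'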
  1. it appears that our 'install-nginx' script *does* still have the ModSecurity updates
[root@osestaging1 image]# grep ModSec /var/discourse/image/base/install-nginx
git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git
./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli --add-module=/tmp/ModSecurity-nginx
rm -fr /tmp/ModSecurity-nginx
[root@osestaging1 image]# 
  1. And that launcher script should be using our custom-built image with the above-compiled nginx
[root@osestaging1 discourse]# grep 'image=' /var/discourse/launcher
user_run_image=""
	user_run_image="$2"
#image="discourse/base:2.0.20200220-2221"
image="discourse_ose"
  run_image=`cat $config_file | $docker_path run $user_args --rm -i -a stdin -a stdout $image ruby -e \
	run_image=$user_run_image
	run_image="$local_discourse/$config"
  base_image=`cat $config_file | $docker_path run $user_args --rm -i -a stdin -a stdout $image ruby -e \
	image=$base_image
[root@osestaging1 discourse]# 
  1. I listed the images and then checked the running 'discourse_ose' container directly; it does not seem to have the nginx ModSecurity module
[root@osestaging1 ~]# docker image ls
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              b4d3feecf9e1        5 days ago          2.62GB
<none>                          <none>              ab155e01a784        5 days ago          2.62GB
<none>                          <none>              74a546348196        5 days ago          2.62GB
<none>                          <none>              01fdb35472a2        5 days ago          2.62GB
<none>                          <none>              aadefb6214d5        6 days ago          2.61GB
<none>                          <none>              0339dd9638e0        6 days ago          2.61GB
<none>                          <none>              4d0573e1315a        6 days ago          2.61GB
<none>                          <none>              1e49a67ad290        6 days ago          2.61GB
<none>                          <none>              ee4f4a7346c8        6 days ago          2.61GB
<none>                          <none>              6e6c81a35291        6 days ago          2.61GB
<none>                          <none>              4d92ff0b76a7        13 days ago         2.59GB
discourse_ose                   latest              2ea22070a06d        2 weeks ago         2.33GB
[root@osestaging1 ~]#
[root@osestaging1 discourse]# docker exec -it discourse_ose bash
root@osestaging1-discourse-ose:/# which nginx
/usr/sbin/nginx
root@osestaging1-discourse-ose:/# /usr/sbin/nginx -V
nginx version: nginx/1.14.2
built with OpenSSL 1.1.1c  28 May 2019 (running with OpenSSL 1.1.1d  10 Sep 2019)
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-tBUzFN/nginx-1.14.2=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-auth-pam --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-dav-ext --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-echo --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-upstream-fair --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-subs-filter
root@osestaging1-discourse-ose:/# 
  1. I checked both of the images (not sure the difference), and they *both* have nginx with ModSecurity. what gives?
[root@osestaging1 ~]# docker run --rm -it --entrypoint /bin/bash 2ea22070a06d
root@848a91dd79d5:/# which nginx
/usr/sbin/nginx
root@848a91dd79d5:/# /usr/sbin/nginx -V
nginx version: nginx/1.17.4
built by gcc 8.3.0 (Debian 8.3.0-6) 
built with OpenSSL 1.1.1d  10 Sep 2019
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli --add-module=/tmp/ModSecurity-nginx
root@848a91dd79d5:/# exit
[root@osestaging1 ~]# 
[root@osestaging1 ~]# docker run --rm -it --entrypoint /bin/bash b4d3feecf9e1
root@db62a79cd1f5:/# which nginx
/usr/sbin/nginx
root@db62a79cd1f5:/# /usr/sbin/nginx -V
nginx version: nginx/1.17.4
built by gcc 8.3.0 (Debian 8.3.0-6) 
built with OpenSSL 1.1.1d  10 Sep 2019
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli --add-module=/tmp/ModSecurity-nginx
root@db62a79cd1f5:/# exit
[root@osestaging1 ~]# 
  1. so I've proven that the image on which the discourse_ose container is built has the ModSecurity module, but the container itself does not. Why?!?
  2. I restarted the container and connected again; same issue
[root@osestaging1 ~]# docker ps
CONTAINER ID        IMAGE                           COMMAND             CREATED             STATUS              PORTS               NAMES
024f2e0d34e7        local_discourse/discourse_ose   "/sbin/boot"        5 days ago          Up 4 days                               discourse_ose
[root@osestaging1 ~]# docker stop discourse_ose
discourse_ose
[root@osestaging1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@osestaging1 ~]# docker start discourse_ose
discourse_ose
[root@osestaging1 ~]# docker ps
CONTAINER ID        IMAGE                           COMMAND             CREATED             STATUS              PORTS               NAMES
024f2e0d34e7        local_discourse/discourse_ose   "/sbin/boot"        5 days ago          Up 6 seconds                            discourse_ose
[root@osestaging1 ~]# docker exec -it 024f2e0d34e7 bash
root@osestaging1-discourse-ose:/# nginx -V
nginx version: nginx/1.14.2
built with OpenSSL 1.1.1c  28 May 2019 (running with OpenSSL 1.1.1d  10 Sep 2019)
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-tBUzFN/nginx-1.14.2=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-auth-pam --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-dav-ext --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-echo --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-upstream-fair --add-dynamic-module=/build/nginx-tBUzFN/nginx-1.14.2/debian/modules/http-subs-filter
root@osestaging1-discourse-ose:/# 
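One hypothesis (mine, not confirmed in this log): a package install inside the running container pulled in Debian's stock nginx 1.14.2 and overwrote the custom-built 1.17.4 ModSecurity binary from the image. A couple of quick checks, assuming the container is still running:

# did apt/dpkg install the distro nginx package inside the container?
docker exec discourse_ose dpkg -l 'nginx*'
# which files changed in the container relative to the image it was created from?
docker diff discourse_ose | grep -i nginx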
  1. fuck it; let me just try the whole rebuild process again; maybe I somehow broke something when playing with cron? Ugh, that failed
[root@osestaging1 ~]# time /var/discourse/launcher rebuild discourse_ose
fatal: ref HEAD is not a symbolic ref
Stopping old container
+ /bin/docker stop -t 10 discourse_ose
discourse_ose
cd /pups && git pull && /pups/bin/pups --stdin
Already up to date.
I, [2020-03-30T10:05:46.024552 #1]  INFO -- : Loading --stdin
I, [2020-03-30T10:05:46.064748 #1]  INFO -- : File > /etc/cron.d/unattended-upgrades  chmod:   chown: 
I, [2020-03-30T10:05:46.065502 #1]  INFO -- : > /bin/echo -e "\n" >> /etc/cron.d/unattended-upgrades
I, [2020-03-30T10:05:46.082359 #1]  INFO -- : 
I, [2020-03-30T10:05:46.082797 #1]  INFO -- : > /usr/bin/sv restart cron
I, [2020-03-30T10:05:46.092291 #1]  INFO -- : warning: cron: unable to open supervise/ok: file does not exist



FAILED
--------------------
Pups::ExecError: /usr/bin/sv restart cron failed with return #<Process::Status: pid 17 exit 1>
Location of failure: /pups/lib/pups/exec_command.rb:112:in `spawn'
exec failed with the params "/usr/bin/sv restart cron"
ef3831609b3d660a2f732457a43cb1221136a40c41abbacd984c7b584511eb10
 FAILED TO BOOTSTRAP  please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.

real    0m30.386s
user    0m1.266s
sys     0m1.184s
[root@osestaging1 ~]# 
  1. well, I guess it didn't like my unattended-upgrades 'sv restart cron' change from before
  2. if that isn't working, I'll go ahead and update it to use anacron
[root@osestaging1 templates]# cat unattended-upgrades.template.yml 
run:
  - file:
	 path: /etc/cron.daily/unattended-upgrades
	 contents: |+
		#!/bin/bash
		################################################################################
		# File:    /etc/cron.daily/unattended-upgrades
		# Version: 0.1
		# Purpose: run unattended-upgrades in lieu of systemd. For more info see
		#           * https://wiki.opensourceecology.org/wiki/Discourse
		#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
		# Author:  Michael Altfield <michael@opensourceecology.org>
		# Created: 2020-03-23
		# Updated: 2020-03-23
		################################################################################
		/usr/bin/nice /usr/bin/unattended-upgrades --debug
        

  - exec: /bin/echo -e "\n" >> /etc/cron.daily/unattended-upgrades
[root@osestaging1 templates]# 
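One caveat with the cron.daily approach (my assumption, not verified here): run-parts only executes files in /etc/cron.daily that are marked executable, so the template probably also needs a chmod step, e.g.:

  - exec: chmod 0755 /etc/cron.daily/unattended-upgrades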
  1. I kicked off a rebuild again, but it failed when attempting to remove the old container
[root@osestaging1 templates]# time /var/discourse/launcher rebuild discourse_ose
...
171:M 30 Mar 2020 10:22:50.099 # Redis is now ready to exit, bye bye...
2020-03-30 10:22:50.131 UTC [54] LOG:  database system is shut down
sha256:3d56f54cae88531901d6c848ced40ee167211b5f49a998f68f6cb6a89097f72f
1b7ecd1ed4b5e619a2453e6dff03fc65a670741aaee8dd5999bf2e235d1cefa8
Removing old container
+ /bin/docker rm discourse_ose
Error response from daemon: container 024f2e0d34e7bb124c619d099c447cc9253caf034c699a809bacad807966983f: driver "overlay2" failed to remove root filesystem: unlinkat /var/lib/docker/overlay2/bc489394b31e54312d1a520292f39b5bb1b6d86debefc02c0ff4094ffd23975d/merged: device or resource busy

starting up existing container
+ /bin/docker start discourse_ose
Error response from daemon: container is marked for removal and cannot be started
Error: failed to start containers: discourse_ose

real    9m18.247s
user    0m1.611s
sys     0m1.443s
[root@osestaging1 templates]# 
  1. I removed it manually
[root@osestaging1 templates]# docker ps -a --no-trunc
CONTAINER ID                                                       IMAGE                                                                     COMMAND                CREATED             STATUS                         PORTS               NAMES
241a7ba2dc07dfae986656ba6a4bfc7979966bc85460a92b1f1f7c80981d3888   2ea22070a06d                                                              "/sbin/boot"           47 minutes ago      Exited (6) 45 minutes ago                          amazing_ardinghelli
b5203dbb31b44dc4bf9fe9d6e48ddcafc5ec0ad26a1fb921d7d72125842ba4c4   sha256:b4d3feecf9e1474daa9ce45d1b965013bfec08d7c393814e2dc30a0486445f68   "/usr/sbin/nginx -V"   About an hour ago   Created                                            nice_gates
e9951377d0314c5f32f7a559156a110f4b8bc57c799e45023ae6cd4ce016998c   sha256:b4d3feecf9e1474daa9ce45d1b965013bfec08d7c393814e2dc30a0486445f68   "whoami"               About an hour ago   Exited (0) About an hour ago                       infallible_archimedes
4980888c90dc552f9b0a9794e1614d0469b5668715e2a3460615a1c2459b08ee   sha256:b4d3feecf9e1474daa9ce45d1b965013bfec08d7c393814e2dc30a0486445f68   "/usr/sbin/nginx -V"   About an hour ago   Created                                            dazzling_williams
30328a972f6d3df290c2423512e445d877c4e5c1ca27f1bf26351084f17bc76e   discourse_ose                                                             "/usr/sbin/nginx -V"   About an hour ago   Created                                            nice_goldberg
ce69aba97c8a498722d9916635bbf899dece6416b1a5af883635fc03437f0342   discourse_ose                                                             "nginx -V"             About an hour ago   Created                                            zealous_mclaren
024f2e0d34e7bb124c619d099c447cc9253caf034c699a809bacad807966983f   sha256:b4d3feecf9e1474daa9ce45d1b965013bfec08d7c393814e2dc30a0486445f68   "/sbin/boot"           5 days ago          Removal In Progress                                discourse_ose
[root@osestaging1 templates]# systemctl stop docker
[root@osestaging1 templates]# rm -rf /var/lib/docker/containers/024f2e0d34e7bb124c619d099c447cc9253caf034c699a809bacad807966983f
[root@osestaging1 templates]# systemctl start docker
[root@osestaging1 templates]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                         PORTS               NAMES
241a7ba2dc07        2ea22070a06d        "/sbin/boot"           48 minutes ago      Exited (6) 46 minutes ago                          amazing_ardinghelli
b5203dbb31b4        b4d3feecf9e1        "/usr/sbin/nginx -V"   About an hour ago   Created                                            nice_gates
e9951377d031        b4d3feecf9e1        "whoami"               About an hour ago   Exited (0) About an hour ago                       infallible_archimedes
4980888c90dc        b4d3feecf9e1        "/usr/sbin/nginx -V"   About an hour ago   Created                                            dazzling_williams
30328a972f6d        discourse_ose       "/usr/sbin/nginx -V"   About an hour ago   Created                                            nice_goldberg
ce69aba97c8a        discourse_ose       "nginx -V"             About an hour ago   Created                                            zealous_mclaren
[root@osestaging1 templates]# 
  1. ok, this time it came up with ModSecurity. no idea why the last one didn't have it..
[root@osestaging1 templates]# /var/discourse/launcher start discourse_ose

+ /bin/docker run --shm-size=512m -d --restart=always -e LANG=en_US.UTF-8 -e RAILS_ENV=production -e UNICORN_WORKERS=2 -e UNICORN_SIDEKIQS=1 -e RUBY_GLOBAL_METHOD_CACHE_SIZE=131072 -e RUBY_GC_HEAP_GROWTH_MAX_SLOTS=40000 -e RUBY_GC_HEAP_INIT_SLOTS=400000 -e RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.5 -e DISCOURSE_DB_SOCKET=/var/run/postgresql -e DISCOURSE_DB_HOST= -e DISCOURSE_DB_PORT= -e DISCOURSE_HOSTNAME=discourse.opensourceecology.org -e DISCOURSE_DEVELOPER_EMAILS=discourse@opensourceecology.org,ops@opensourceecology.org,marcin@opensourceecology.org,michael@opensourceecology.org -e DISCOURSE_SMTP_ADDRESS=172.17.0.1 -e DISCOURSE_SMTP_PORT=25 -e DISCOURSE_SMTP_AUTHENTICATION=none -e DISCOURSE_SMTP_OPENSSL_VERIFY_MODE=none -e DISCOURSE_SMTP_ENABLE_START_TLS=false -h osestaging1-discourse-ose -e DOCKER_HOST_IP=172.17.0.1 --name discourse_ose -t -v /var/discourse/shared/standalone:/shared -v /var/discourse/shared/standalone/log/var-log:/var/log --mac-address 02:fc:97:b8:b4:0d --cap-add NET_ADMIN local_discourse/discourse_ose /sbin/boot
36e92bae0fde4bcb4cfc9227591295847073480afcac8119d5a3b53f6ad368ce
[root@osestaging1 templates]# /var/discourse/launcher enter discourse_ose
root@osestaging1-discourse-ose:/var/www/discourse# nginx -V
nginx version: nginx/1.17.4
built by gcc 8.3.0 (Debian 8.3.0-6) 
built with OpenSSL 1.1.1d  10 Sep 2019
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli --add-module=/tmp/ModSecurity-nginx
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. Aaand now I can connect to the Discourse WUI in the browser again, finally.
  2. meanwhile, I went ahead and downgraded the sudo package
[root@osestaging1 discourse]# /var/discourse/launcher enter discourse_ose
root@osestaging1-discourse-ose:/var/www/discourse# apt-get install sudo=1.8.27-1+deb10u1
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be DOWNGRADED:
  sudo
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 3 not upgraded.
Need to get 1,244 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://security.debian.org/debian-security buster/updates/main amd64 sudo amd64 1.8.27-1+deb10u1 [1,244 kB]
Fetched 1,244 kB in 0s (9,770 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
dpkg: warning: downgrading sudo from 1.8.27-1+deb10u2 to 1.8.27-1+deb10u1
(Reading database ... 48062 files and directories currently installed.)
Preparing to unpack .../sudo_1.8.27-1+deb10u1_amd64.deb ...
Unpacking sudo (1.8.27-1+deb10u1) over (1.8.27-1+deb10u2) ...
Setting up sudo (1.8.27-1+deb10u1) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Processing triggers for systemd (241-7~deb10u3) ...
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. I confirmed that the new unattended-upgrades file was created as a script in the anacron 'cron.daily' dir
root@osestaging1-discourse-ose:/var/www/discourse# ls -lah /etc/cron.daily/
total 52K
drwxr-xr-x. 1 root root 4.0K Mar 30 10:15 .
drwxr-xr-x. 1 root root 4.0K Mar 30 10:37 ..
-rwxr-xr-x. 1 root root  311 May 19  2019 0anacron
-rwxr-xr-x. 1 root root 1.5K May 28  2019 apt-compat
-rwxr-xr-x. 1 root root 1.2K Apr 19  2019 dpkg
-rwxr-xr-x. 1 root root 4.1K Sep 27  2019 exim4-base
-rwxr-xr-x. 1 root root  377 Aug 28  2018 logrotate
-rwxr-xr-x. 1 root root  249 Sep 27  2017 passwd
-rw-r--r--. 1 root root  102 Oct 11 07:58 .placeholder
-rwxr-xr-x. 1 root root  441 Apr  6  2019 sysstat
-rw-r--r--. 1 root root  633 Mar 30 10:15 unattended-upgrades
root@osestaging1-discourse-ose:/var/www/discourse# cat /etc/cron.daily/unattended-upgrades 
#!/bin/bash
################################################################################
# File:    /etc/cron.daily/unattended-upgrades
# Version: 0.1
# Purpose: run unattended-upgrades in lieu of systemd. For more info see
#           * https://wiki.opensourceecology.org/wiki/Discourse
#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2020-03-23
# Updated: 2020-03-23
################################################################################
/usr/bin/nice /usr/bin/unattended-upgrades --debug




root@osestaging1-discourse-ose:/var/www/discourse# 
  1. Anyway, back to varnish
  2. I just realized a potential arch issue with varnish: we'll need nginx (ssl termination) -> varnish -> nginx (container web server). I expect that the first nginx and varnish will live on the docker host and the last nginx (whose config file is complex & managed by the Discourse team) will live in the Discourse docker container. But our current config has the outer nginx connecting to that last nginx inside the container via a unix socket file. In order to keep varnish in-between, varnish must support defining a backend using a unix socket file, but I don't see how to do this from reading the varnish backend documentation https://varnish-cache.org/docs/trunk/users-guide/vcl-backends.html
  3. looks like we'd use the '.path' field in the 'backend' declaration instead of the usual '.host' and '.port' https://varnish-cache.org/docs/trunk/whats-new/upgrading-6.0.html#upd-6-0-uds-backend
  4. unfortunately, it appears that varnish only added support for unix sockets (both for clients and for backend servers) in Varnish 6.0, but we're only running varnish 4 (see the backend sketch after the version check below) https://varnish-cache.org/docs/trunk/whats-new/changes-6.0.html?highlight=socket
[root@osestaging1 conf.d]# rpm -qa | grep -i varnish
varnish-libs-4.0.5-1.el7.x86_64
varnish-libs-devel-4.0.5-1.el7.x86_64
varnish-4.0.5-1.el7.x86_64
[root@osestaging1 conf.d]# 
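  5. for reference, a minimal sketch of the two approaches (the varnish 4 form is what we're limited to; the 6.0+ unix-socket form is hypothetical here and would also require VCL 4.1):
# varnish 4.x on CentOS7: TCP backend only
vcl 4.0;
backend discourse {
    .host = "127.0.0.1";
    .port = "8020";     # hypothetical port; chosen below
}
# varnish >= 6.0 would instead allow a unix domain socket backend:
# vcl 4.1;
# backend discourse {
#     .path = "/var/discourse/shared/standalone/nginx.http.sock";
# }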
  1. I documented this on Server Fault to help others https://serverfault.com/questions/1010062/does-varnish-support-unix-domain-socket-files
  2. switching the container's nginx from a unix domain socket to listening over the network is going to be a bit non-trivial and will require reworking the install guide, but it does seem like the best option on CentOS7, where varnish from the repos is pegged to v4
  3. the next question is: what is the hostname and port that the docker host's nginx config should specify to access the Discourse container's nginx server?
    1. our existing architecture on prod has
      1. nginx listening on 443 (and 80, just to redirect to 443) forwarding to
      2. varnish listening on 6081 (as the VARNISH_LISTEN_PORT == cache for nginx) and 6082 (as the VARNISH_ADMIN_LISTEN_PORT == the admin port for making changes to varnishd) forwarding to
      3. Apache listening on 8000 for internet-accessible sites (wordpress sites, wiki, phplist, etc) and 8010 for private sites (like awstats & munin)
    2. therefore, I think I should have the docker host listen on 127.0.0.1:8020 and have Docker forward that traffic to the container's port 80, using the following in the "expose" block of the container yml file (the planned chain is sketched after the links below)
		 - "8020:80" # fwd host port 8020 to container port 80 (http)
    1. See Also
      1. https://meta.discourse.org/t/running-other-websites-on-the-same-machine-as-discourse/17247/261
      2. https://www.digitalocean.com/community/questions/need-help-with-installing-discourse-and-wordpress
      3. https://blog.khophi.co/install-run-discourse-behind-nginx-right-way-first-time/
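    2. so the planned request chain for discourse on staging would look like this (a rough sketch; varnish isn't actually inserted yet):
browser --https--> nginx :443 (docker host, TLS termination)
        --http---> varnish 127.0.0.1:6081 (docker host cache)
        --http---> docker-proxy 127.0.0.1:8020 (published container port)
        --http---> nginx :80 (inside the discourse_ose container)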
  1. I updated the install guide (which will have to be tested end-to-end again) and the container yaml file
[root@osestaging1 containers]# head -n40 discourse_ose.yml
## this is the all-in-one, standalone Discourse Docker container template
##
## After making changes to this file, you MUST rebuild
## /var/discourse/launcher rebuild app
##
## BE *VERY* CAREFUL WHEN EDITING!
## YAML FILES ARE SUPER SUPER SENSITIVE TO MISTAKES IN WHITESPACE OR ALIGNMENT!
## visit http://www.yamllint.com/ to validate this file as needed

docker_args: "--cap-add NET_ADMIN"

templates:
  - "templates/unattended-upgrades.template.yml"
  - "templates/iptables.template.yml"
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
#  - "templates/web.socketed.template.yml"
  - "templates/web.modsecurity.template.yml"
  - "templates/web.ratelimited.template.yml"
## Uncomment these two lines if you wish to add Lets Encrypt (https)
  #- "templates/web.ssl.template.yml"
  #- "templates/web.letsencrypt.ssl.template.yml"

## which TCP/IP ports should this container expose?
## If you want Discourse to share a port with another webserver like Apache or nginx,
## see https://meta.discourse.org/t/17247 for details
#expose:
#  - "80:80"   # http
#  - "443:443" # https

expose:
  - "8020:80" # fwd host port 8020 to container port 80 (http)


params:
  db_default_text_search_config: "pg_catalog.english"

  ## Set db_shared_buffers to a max of 25% of the total memory.
  ## will be set automatically by bootstrap based on detected RAM, or you can override
[root@osestaging1 containers]# 
  1. and I kicked-off a new rebuild
[root@osestaging1 containers]# time /var/discourse/launcher rebuild discourse_ose
...
171:M 30 Mar 2020 13:09:20.409 * DB saved on disk
171:M 30 Mar 2020 13:09:20.410 # Redis is now ready to exit, bye bye...
2020-03-30 13:09:20.457 UTC [54] LOG:  database system is shut down
sha256:7a74ca8e10cbfa7a4fff5cd82c0af200b60107db711beb0039a571afb0c6894a
5b9b724b6e1b1085cf8d9992721cacaa72282b44f4f18d52ad5d45c6870ea98a
Removing old container
+ /bin/docker rm discourse_ose
discourse_ose

+ /bin/docker run --shm-size=512m -d --restart=always -e LANG=en_US.UTF-8 -e RAILS_ENV=production -e UNICORN_WORKERS=2 -e UNICORN_SIDEKIQS=1 -e RUBY_GLOBAL_METHOD_CACHE_SIZE=131072 -e RUBY_GC_HEAP_GROWTH_MAX_SLOTS=40000 -e RUBY_GC_HEAP_INIT_SLOTS=400000 -e RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.5 -e DISCOURSE_DB_SOCKET=/var/run/postgresql -e DISCOURSE_DB_HOST= -e DISCOURSE_DB_PORT= -e DISCOURSE_HOSTNAME=discourse.opensourceecology.org -e DISCOURSE_DEVELOPER_EMAILS=discourse@opensourceecology.org,ops@opensourceecology.org,marcin@opensourceecology.org,michael@opensourceecology.org -e DISCOURSE_SMTP_ADDRESS=172.17.0.1 -e DISCOURSE_SMTP_PORT=25 -e DISCOURSE_SMTP_AUTHENTICATION=none -e DISCOURSE_SMTP_OPENSSL_VERIFY_MODE=none -e DISCOURSE_SMTP_ENABLE_START_TLS=false -h osestaging1-discourse-ose -e DOCKER_HOST_IP=172.17.0.1 --name discourse_ose -t -p 8020:80 -v /var/discourse/shared/standalone:/shared -v /var/discourse/shared/standalone/log/var-log:/var/log --mac-address 02:fc:97:b8:b4:0d --cap-add NET_ADMIN local_discourse/discourse_ose /sbin/boot
15a32ba3c8e485f9591c7925dcd48ee44ca0216e4df99570a29e3b04990267dd

real    8m39.950s
user    0m2.409s
sys     0m2.298s
[root@osestaging1 containers]#
  1. And I downgraded sudo again
[root@osestaging1 discourse]# /var/discourse/launcher enter discourse_ose
root@osestaging1-discourse-ose:/var/www/discourse# apt-get install sudo=1.8.27-1+deb10u1
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be DOWNGRADED:
  sudo
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 3 not upgraded.
Need to get 1,244 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://security.debian.org/debian-security buster/updates/main amd64 sudo amd64 1.8.27-1+deb10u1 [1,244 kB]
Fetched 1,244 kB in 0s (6,493 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
dpkg: warning: downgrading sudo from 1.8.27-1+deb10u2 to 1.8.27-1+deb10u1
(Reading database ... 48062 files and directories currently installed.)
Preparing to unpack .../sudo_1.8.27-1+deb10u1_amd64.deb ...
Unpacking sudo (1.8.27-1+deb10u1) over (1.8.27-1+deb10u2) ...
Setting up sudo (1.8.27-1+deb10u1) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Processing triggers for systemd (241-7~deb10u3) ...
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. it appears that the docker host now has some docker process listening on port 8020
[root@osestaging1 containers]# ss -plan | grep -i 8020
tcp    LISTEN     0      128      :::8020                 :::*                   users:(("docker-proxy",pid=13528,fd=4))
[root@osestaging1 containers]# 
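  2. a quick sanity check of the published port directly from the docker host would be the following (a suggestion; I didn't capture its output here):
curl -sI http://127.0.0.1:8020/ | head -n1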
  1. meanwhile I can no longer hit the Discourse WUI (not surprising)
user@ose:~$ curl -kI https://discourse.opensourceecology.org/
HTTP/1.1 502 Bad Gateway
Server: nginx
Date: Mon, 30 Mar 2020 13:19:55 GMT
Content-Type: text/html
Content-Length: 150
Connection: keep-alive

user@ose:~$ 
  1. I changed the "outer" nginx from the sock to the 8020 port. and restarted nginx.
[root@osestaging1 containers]# #vim /etc/nginx/conf.d/discourse.opensourceecology.org.conf 
[root@osestaging1 containers]# tail /etc/nginx/conf.d/discourse.opensourceecology.org.conf 
#proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
proxy_pass http://127.0.0.1:8020/;
proxy_set_header Host $http_host;
proxy_http_version 1.1; 
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Real-IP $remote_addr;
}

}
[root@osestaging1 containers]# 
[root@osestaging1 containers]# nginx -t
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.openbuildinginstitute.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.openbuildinginstitute.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.openbuildinginstitute.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.openbuildinginstitute.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/ssl.opensourceecology.org.include:11
nginx: [warn] conflicting server name "_" on 10.241.189.11:443, ignored
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@osestaging1 containers]# systemctl restart nginx
[root@osestaging1 containers]# 
  1. And then the 502 changed to a timeout; I bet this is due to iptables
user@ose:~$ curl -kI https://discourse.opensourceecology.org/
HTTP/1.1 504 Gateway Time-out
Server: nginx
Date: Mon, 30 Mar 2020 13:23:20 GMT
Content-Type: text/html
Content-Length: 160
Connection: keep-alive

user@ose:~$ 
  1. Here are the iptables rules inside the container
root@osestaging1-discourse-ose:/var/www/discourse# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            
DROP       all  --  localhost            anywhere            
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
DROP       all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  localhost            localhost           
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             owner UID match root
ACCEPT     all  --  anywhere             anywhere             owner UID match _apt
DROP       all  --  anywhere             anywhere            
# Warning: iptables-legacy tables present, use iptables-legacy to see them
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. sure enough, flushing the iptables rules made the site accessible again
root@osestaging1-discourse-ose:/var/www/discourse# cd /tmp
root@osestaging1-discourse-ose:/tmp# ls
root@osestaging1-discourse-ose:/tmp# mkdir iptables.bak
root@osestaging1-discourse-ose:/tmp# cd iptables.bak/
root@osestaging1-discourse-ose:/tmp/iptables.bak# iptables-save > iptables-save.a
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
root@osestaging1-discourse-ose:/tmp/iptables.bak# iptables -F
root@osestaging1-discourse-ose:/tmp/iptables.bak# iptables-save
# Generated by xtables-save v1.8.2 on Mon Mar 30 13:24:49 2020
*filter
:INPUT ACCEPT [16:2465]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [14:3367]
COMMIT
# Completed on Mon Mar 30 13:24:49 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
root@osestaging1-discourse-ose:/tmp/iptables.bak# 
  1. a quick tcpdump shows that the container sees traffic coming in from '172.17.0.1' (which I guess is why it's not matched by the existing 'loopback ACCEPT' rule in the INPUT chain)
root@osestaging1-discourse-ose:/tmp/iptables.bak# tcpdump -ns0 | less
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:28:08.872050 IP 172.17.0.1.59748 > 172.17.0.2.80: Flags [S], seq 571394495, win 29200, options [mss 1460,sackOK,TS val 930437494 ecr 0,nop,wscale 7], length 0
13:28:08.872163 IP 172.17.0.2.80 > 172.17.0.1.59748: Flags [S.], seq 1274123738, ack 571394496, win 28960, options [mss 1460,sackOK,TS val 930437494 ecr 930437494,nop,wscale 7], length 0
13:28:08.872218 IP 172.17.0.1.59748 > 172.17.0.2.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 930437494 ecr 930437494], length 0
13:28:08.872756 IP 172.17.0.1.59748 > 172.17.0.2.80: Flags [P.], seq 1:473, ack 1, win 229, options [nop,nop,TS val 930437495 ecr 930437494], length 472: HTTP: GET / HTTP/1.1
13:28:08.872772 IP 172.17.0.2.80 > 172.17.0.1.59748: Flags [.], ack 473, win 235, options [nop,nop,TS val 930437495 ecr 930437495], length 0
13:28:08.916602 IP 172.17.0.2.80 > 172.17.0.1.59748: Flags [.], seq 1:7241, ack 473, win 235, options [nop,nop,TS val 930437539 ecr 930437495], length 7240: HTTP: HTTP/1.1 200 OK
13:28:08.916640 IP 172.17.0.1.59748 > 172.17.0.2.80: Flags [.], ack 7241, win 342, options [nop,nop,TS val 930437539 ecr 930437539], length 0
13:28:08.916650 IP 172.17.0.2.80 > 172.17.0.1.59748: Flags [P.], seq 7241:12326, ack 473, win 235, options [nop,nop,TS val 930437539 ecr 930437495], length 5085: HTTP
13:28:08.916660 IP 172.17.0.1.59748 > 172.17.0.2.80: Flags [.], ack 12326, win 421, options [nop,nop,TS val 930437539 ecr 930437539], length 0
13:28:08.917291 IP 172.17.0.2.80 > 172.17.0.1.59748: Flags [F.], seq 12326, ack 473, win 235, options [nop,nop,TS val 930437540 ecr 930437539], length 0
13:28:08.918315 IP 172.17.0.1.59748 > 172.17.0.2.80: Flags [F.], seq 473, ack 12327, win 421, options [nop,nop,TS val 930437541 ecr 930437540], length 0
13:28:08.918328 IP 172.17.0.2.80 > 172.17.0.1.59748: Flags [.], ack 474, win 235, options [nop,nop,TS val 930437541 ecr 930437541], length 0
13:28:09.199619 IP 172.17.0.1.59754 > 172.17.0.2.80: Flags [S], seq 1269147298, win 29200, options [mss 1460,sackOK,TS val 930437822 ecr 0,nop,wscale 7], length 0
13:28:09.199660 IP 172.17.0.2.80 > 172.17.0.1.59754: Flags [S.], seq 3148809447, ack 1269147299, win 28960, options [mss 1460,sackOK,TS val 930437822 ecr 930437822,nop,wscale 7], length 0
13:28:09.199719 IP 172.17.0.1.59754 > 172.17.0.2.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 930437822 ecr 930437822], length 0
13:28:09.199927 IP 172.17.0.1.59754 > 172.17.0.2.80: Flags [P.], seq 1:528, ack 1, win 229, options [nop,nop,TS val 930437822 ecr 930437822], length 527: HTTP: GET /stylesheets/desktop_e45d80f0c537494b37e72cede1e6dd0d0d76e33a.css?__ws=discourse.opensourceecology.org HTTP/1.1
13:28:09.199949 IP 172.17.0.2.80 > 172.17.0.1.59754: Flags [.], ack 528, win 235, options [nop,nop,TS val 930437822 ecr 930437822], length 0
13:28:09.203135 IP 172.17.0.1.59766 > 172.17.0.2.80: Flags [S], seq 1364122195, win 29200, options [mss 1460,sackOK,TS val 930437825 ecr 0,nop,wscale 7], length 0
13:28:09.203174 IP 172.17.0.2.80 > 172.17.0.1.59766: Flags [S.], seq 125163017, ack 1364122196, win 28960, options [mss 1460,sackOK,TS val 930437825 ecr 930437825,nop,wscale 7], length 0
13:28:09.203202 IP 172.17.0.1.59766 > 172.17.0.2.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 930437825 ecr 930437825], length 0
13:28:09.203354 IP 172.17.0.1.59768 > 172.17.0.2.80: Flags [S], seq 3141961263, win 29200, options [mss 1460,sackOK,TS val 930437826 ecr 0,nop,wscale 7], length 0
13:28:09.203377 IP 172.17.0.2.80 > 172.17.0.1.59768: Flags [S.], seq 3571653253, ack 3141961264, win 28960, options [mss 1460,sackOK,TS val 930437826 ecr 930437826,nop,wscale 7], length 0
  1. That 172.17.0.1 & 172.17.0.2 address are part of the reserved ipv4 block 172.16.0.0/12 (172.16.0.0172.31.255.255). I'll just whitelist that whole block for traffic to port 80 on the docker host's iptables rules
root@osestaging1-discourse-ose:/tmp/iptables.bak# cp iptables-save.a iptables-save.b
root@osestaging1-discourse-ose:/tmp/iptables.bak# vim iptables-save.b
root@osestaging1-discourse-ose:/tmp/iptables.bak# cat iptables-save.b
# Generated by xtables-save v1.8.2 on Mon Mar 30 13:24:22 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -s 172.16.0.0/12 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -s 127.0.0.1/32 -j DROP
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -j DROP
-A OUTPUT -s 127.0.0.1/32 -d 127.0.0.1/32 -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -m owner --uid-owner 100 -j ACCEPT
-A OUTPUT -j DROP
COMMIT
# Completed on Mon Mar 30 13:24:22 2020
root@osestaging1-discourse-ose:/tmp/iptables.bak# 
  1. and I updated the iptables template file and the documentation too
[root@osestaging1 discourse]# cat templates/iptables.template.yml 
run:
  - file:
	 path: /etc/runit/1.d/01-iptables
	 chmod: "+x"
	 contents: |
		#!/bin/bash
		################################################################################
		# File:    /etc/runit/1.d/01-iptables
		# Version: 0.3
		# Purpose: installs & locks-down iptables
		# Author:  Michael Altfield <michael@opensourceecology.org>
		# Created: 2019-11-26
		# Updated: 2020-03-30
		################################################################################
		sudo apt-get update
		sudo apt-get install -y iptables 
		sudo iptables -A INPUT -i lo -j ACCEPT
		sudo iptables -A INPUT -s 172.16.0.0/12 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
		sudo iptables -A INPUT -s 127.0.0.1/32 -j DROP
		sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
		sudo iptables -A INPUT -j DROP
		sudo iptables -A OUTPUT -s 127.0.0.1/32 -d 127.0.0.1/32 -j ACCEPT
		sudo iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
		sudo iptables -A OUTPUT -m owner --uid-owner 0 -j ACCEPT
		sudo iptables -A OUTPUT -m owner --uid-owner 100 -j ACCEPT
		sudo iptables -A OUTPUT -j DROP
		sudo ip6tables -A INPUT -i lo -j ACCEPT
		sudo ip6tables -A INPUT -s ::1/128 -j DROP
		sudo ip6tables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
		sudo ip6tables -A INPUT -j DROP
		sudo ip6tables -A OUTPUT -s ::1/128 -d ::1/128 -j ACCEPT
		sudo ip6tables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
		sudo ip6tables -A OUTPUT -m owner --uid-owner 0 -j ACCEPT
		sudo ip6tables -A OUTPUT -m owner --uid-owner 100 -j ACCEPT
		sudo ip6tables -A OUTPUT -j DROP

[root@osestaging1 discourse]# 
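  2. after the next rebuild it's worth double-checking that the new rule actually lands inside the container, e.g. (to be run then):
/var/discourse/launcher enter discourse_ose
iptables -L INPUT -n --line-numbers | grep 172.16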
  1. ok, nginx -> nginx is working without sockets now and the unattended-upgrades cron is set up in the anacron daily dir. Tomorrow we'll
    1. check to see if unattended-upgrades actually upgraded the sudo package
    2. put varnish between the outer & inner nginx processes

Thu Mar 26, 2020

  1. It looks like the unattended-upgrade cron job didn't run *again* last night
  2. I'm beginning to think that the cron system used by this Discourse docker container doesn't automatically pick up files from the /etc/cron.d/ dir. Let's see if it uses any other cron jobs and how they're set up (there's also a quick test sketched after the grep output below)
[root@osestaging1 templates]# grep -irC4 'cron' /var/discourse/templates/
/var/discourse/templates/postgres.10.template.yml-        pg_basebackup --format=tar --pgdata=- --xlog --gzip --label=$ID > $FILENAME
/var/discourse/templates/postgres.10.template.yml-        echo $FILENAME
/var/discourse/templates/postgres.10.template.yml-
/var/discourse/templates/postgres.10.template.yml-  - file:
/var/discourse/templates/postgres.10.template.yml:     path: /var/spool/cron/crontabs/postgres
/var/discourse/templates/postgres.10.template.yml-     contents: |
/var/discourse/templates/postgres.10.template.yml-        # m h  dom mon dow   command
/var/discourse/templates/postgres.10.template.yml-        #MAILTO=?
/var/discourse/templates/postgres.10.template.yml-        #0 */4 * * * /var/lib/postgresql/take-database-backup
--
/var/discourse/templates/postgres.9.5.template.yml-        pg_basebackup --format=tar --pgdata=- --xlog --gzip --label=$ID > $FILENAME
/var/discourse/templates/postgres.9.5.template.yml-        echo $FILENAME
/var/discourse/templates/postgres.9.5.template.yml-
/var/discourse/templates/postgres.9.5.template.yml-  - file:
/var/discourse/templates/postgres.9.5.template.yml:     path: /var/spool/cron/crontabs/postgres
/var/discourse/templates/postgres.9.5.template.yml-     contents: |
/var/discourse/templates/postgres.9.5.template.yml-        # m h  dom mon dow   command
/var/discourse/templates/postgres.9.5.template.yml-        #MAILTO=?
/var/discourse/templates/postgres.9.5.template.yml-        #0 */4 * * * /var/lib/postgresql/take-database-backup
--
/var/discourse/templates/web.template.yml-  - exec: /usr/local/bin/ruby -e 'if ENV["DISCOURSE_SMTP_ADDRESS"] == "smtp.example.com"; puts "Aborting! Mail is not configured!"; exit 1; end'
/var/discourse/templates/web.template.yml-  - exec: /usr/local/bin/ruby -e 'if ENV["DISCOURSE_HOSTNAME"] == "discourse.example.com"; puts "Aborting! Domain is not configured!"; exit 1; end'
/var/discourse/templates/web.template.yml-  - exec: /usr/local/bin/ruby -e 'if (ENV["DISCOURSE_CDN_URL"] || "")[0..1] == "//"; puts "Aborting! CDN must have a protocol specified. Once fixed you should rebake your posts now to correct all posts."; exit 1; end'
/var/discourse/templates/web.template.yml-  - exec: chown -R discourse /home/discourse
/var/discourse/templates/web.template.yml:  # TODO: move to base image (anacron can not be fired up using rc.d)
/var/discourse/templates/web.template.yml:  - exec: rm -f /etc/cron.d/anacron
/var/discourse/templates/web.template.yml-  - file:
/var/discourse/templates/web.template.yml:     path: /etc/cron.d/anacron
/var/discourse/templates/web.template.yml-     contents: |
/var/discourse/templates/web.template.yml-        SHELL=/bin/sh
/var/discourse/templates/web.template.yml-        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
/var/discourse/templates/web.template.yml-
/var/discourse/templates/web.template.yml:        30 7    * * *   root  /usr/sbin/anacron -s >/dev/null
/var/discourse/templates/web.template.yml-  - file:
/var/discourse/templates/web.template.yml-     path: /etc/runit/1.d/copy-env
/var/discourse/templates/web.template.yml-     chmod: "+x"
/var/discourse/templates/web.template.yml-     contents: |
--
/var/discourse/templates/web.template.yml-          endscript
/var/discourse/templates/web.template.yml-        }
/var/discourse/templates/web.template.yml-
/var/discourse/templates/web.template.yml-  # move state out of the container this fancy is done to support rapid rebuilds of containers,
/var/discourse/templates/web.template.yml:  # we store anacron and logrotate state outside the container to ensure its maintained across builds
/var/discourse/templates/web.template.yml-  # later move this snipped into an intialization script
/var/discourse/templates/web.template.yml-  # we also ensure all the symlinks we need to /shared are in place in the correct structure
/var/discourse/templates/web.template.yml-  # this allows us to bootstrap on one machine and then run on another
/var/discourse/templates/web.template.yml-  - file:
--
/var/discourse/templates/web.template.yml-          rm -fr /var/lib/logrotate
/var/discourse/templates/web.template.yml-          mkdir -p /shared/state/logrotate
/var/discourse/templates/web.template.yml-          ln -s /shared/state/logrotate /var/lib/logrotate
/var/discourse/templates/web.template.yml-        fi
/var/discourse/templates/web.template.yml:        if [[ ! -L /var/spool/anacron ]]; then
/var/discourse/templates/web.template.yml:          rm -fr /var/spool/anacron
/var/discourse/templates/web.template.yml:          mkdir -p /shared/state/anacron-spool
/var/discourse/templates/web.template.yml:          ln -s /shared/state/anacron-spool /var/spool/anacron
/var/discourse/templates/web.template.yml-        fi
/var/discourse/templates/web.template.yml-        if [[ ! -d /shared/log/rails ]]; then
/var/discourse/templates/web.template.yml-          mkdir -p /shared/log/rails
/var/discourse/templates/web.template.yml-          chown -R discourse:www-data /shared/log/rails
--
/var/discourse/templates/postgres.template.yml-        pg_basebackup --format=tar --pgdata=- --xlog --gzip --label=$ID > $FILENAME
/var/discourse/templates/postgres.template.yml-        echo $FILENAME
/var/discourse/templates/postgres.template.yml-
/var/discourse/templates/postgres.template.yml-  - file:
/var/discourse/templates/postgres.template.yml:     path: /var/spool/cron/crontabs/postgres
/var/discourse/templates/postgres.template.yml-     contents: |
/var/discourse/templates/postgres.template.yml-        # m h  dom mon dow   command
/var/discourse/templates/postgres.template.yml-        #MAILTO=?
/var/discourse/templates/postgres.template.yml-        #0 */4 * * * /var/lib/postgresql/take-database-backup
--
/var/discourse/templates/web.letsencrypt.ssl.template.yml-
/var/discourse/templates/web.letsencrypt.ssl.template.yml-    - exec:
/var/discourse/templates/web.letsencrypt.ssl.template.yml-       cmd:
/var/discourse/templates/web.letsencrypt.ssl.template.yml-         - cd /root && git clone --branch 2.8.2 --depth 1 https://github.com/Neilpang/acme.sh.git && cd /root/acme.sh
/var/discourse/templates/web.letsencrypt.ssl.template.yml:         - touch /var/spool/cron/crontabs/root
/var/discourse/templates/web.letsencrypt.ssl.template.yml-         - install -d -m 0755 -g root -o root $LETSENCRYPT_DIR
/var/discourse/templates/web.letsencrypt.ssl.template.yml-         - cd /root/acme.sh && LE_WORKING_DIR="${LETSENCRYPT_DIR}" ./acme.sh --install --log "${LETSENCRYPT_DIR}/acme.sh.log"
/var/discourse/templates/web.letsencrypt.ssl.template.yml-         - cd /root/acme.sh && LE_WORKING_DIR="${LETSENCRYPT_DIR}" ./acme.sh --upgrade --auto-upgrade
/var/discourse/templates/web.letsencrypt.ssl.template.yml-
--
/var/discourse/templates/cron.template.yml-run:
/var/discourse/templates/cron.template.yml-  - exec:
/var/discourse/templates/cron.template.yml:      hook: cron
/var/discourse/templates/cron.template.yml:      cmd: echo cron is now included in base image, remove from templates
--
/var/discourse/templates/unattended-upgrades.template.yml-run:
/var/discourse/templates/unattended-upgrades.template.yml-  - file:
/var/discourse/templates/unattended-upgrades.template.yml:     path: /etc/cron.d/unattended-upgrades
/var/discourse/templates/unattended-upgrades.template.yml-     contents: |+
/var/discourse/templates/unattended-upgrades.template.yml-        ################################################################################
/var/discourse/templates/unattended-upgrades.template.yml:        # File:    /etc/cron.d/unattended-upgrades
/var/discourse/templates/unattended-upgrades.template.yml-        # Version: 0.1
/var/discourse/templates/unattended-upgrades.template.yml-        # Purpose: run unattended-upgrades in lieu of systemd. For more info see
/var/discourse/templates/unattended-upgrades.template.yml-        #           * https://wiki.opensourceecology.org/wiki/Discourse
/var/discourse/templates/unattended-upgrades.template.yml-        #           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
--
/var/discourse/templates/unattended-upgrades.template.yml-        ################################################################################
/var/discourse/templates/unattended-upgrades.template.yml-        20 04 * * * root /usr/bin/nice /usr/bin/unattended-upgrades --debug
/var/discourse/templates/unattended-upgrades.template.yml-
/var/discourse/templates/unattended-upgrades.template.yml-
/var/discourse/templates/unattended-upgrades.template.yml:  - exec: /bin/echo -e "\n" >> /etc/cron.d/unattended-upgrades
--
/var/discourse/templates/postgres.9.3.template.yml-        pg_basebackup --format=tar --pgdata=- --xlog --gzip --label=$ID > $FILENAME
/var/discourse/templates/postgres.9.3.template.yml-        echo $FILENAME
/var/discourse/templates/postgres.9.3.template.yml-
/var/discourse/templates/postgres.9.3.template.yml-  - file:
/var/discourse/templates/postgres.9.3.template.yml:     path: /var/spool/cron/crontabs/postgres
/var/discourse/templates/postgres.9.3.template.yml-     contents: |
/var/discourse/templates/postgres.9.3.template.yml-        # m h  dom mon dow   command
/var/discourse/templates/postgres.9.3.template.yml-        #MAILTO=?
/var/discourse/templates/postgres.9.3.template.yml-        #0 */4 * * * /var/lib/postgresql/take-database-backup
--
/var/discourse/templates/import/phpbb3.template.yml-        cd: /etc/service
/var/discourse/templates/import/phpbb3.template.yml-        cmd:
/var/discourse/templates/import/phpbb3.template.yml-          - rm -R unicorn
/var/discourse/templates/import/phpbb3.template.yml-          - rm -R nginx
/var/discourse/templates/import/phpbb3.template.yml:          - rm -R cron
/var/discourse/templates/import/phpbb3.template.yml-
/var/discourse/templates/import/phpbb3.template.yml-    - exec:
/var/discourse/templates/import/phpbb3.template.yml-        cd: /etc/runit/3.d
/var/discourse/templates/import/phpbb3.template.yml-        cmd:
[root@osestaging1 templates]# 
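  3. (a quick way to test whether this container's cron daemon picks up /etc/cron.d at all would be to drop a one-minute test job and watch for output; a sketch I haven't run yet:)
cat > /etc/cron.d/crontest <<'EOF'
* * * * * root /bin/date >> /tmp/crontest.log 2>&1
EOF
chmod 644 /etc/cron.d/crontest
# wait a minute or two, then:
cat /tmp/crontest.log
rm /etc/cron.d/crontest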
  1. so it appears that an existing 'postgres' cron lives in /var/spool/cron/crontabs
root@osestaging1-discourse-ose:/etc/runit# ls -lah /var/spool/cron/crontabs/
total 12K
drwx-wx--T. 1 root crontab 4.0K Mar 25 09:37 .
drwxr-xr-x. 1 root root    4.0K Mar 16 07:24 ..
-rw-r--r--. 1 root root      93 Mar 25 09:37 postgres
root@osestaging1-discourse-ose:/etc/runit# cat /var/spool/cron/crontabs/postgres 
# m h  dom mon dow   command
#MAILTO=?
#0 */4 * * * /var/lib/postgresql/take-database-backup
root@osestaging1-discourse-ose:/etc/runit# 
  1. but, at the same time, the web.template.yml creates an /etc/cron.d/anacron job (a cron for starting a cron?). It does define SHELL and PATH env vars, though, which I'm not doing (a sketch of adding them follows the file listing below)
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/cron.d/anacron 
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

30 7    * * *   root    /usr/sbin/anacron -s >/dev/null
root@osestaging1-discourse-ose:/etc/cron.d# 
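  2. if those env vars turn out to matter, a sketch of what the top of my /etc/cron.d/unattended-upgrades could look like (mirroring the anacron job above):
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

20 04 * * * root /usr/bin/nice /usr/bin/unattended-upgrades --debug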
  1. that template also sets up the anacron "state" in the shared dir so that a rebuild doesn't cause the previous anacron "state" to be lost. Looks like that "state" is basically empty, though. Maybe this is where I should put my cron job? In the 'anacron-spool/cron.daily/' dir?
[root@osestaging1 templates]# ls -lah ../shared/standalone/state/anacron-spool/
total 20K
drwxr-xr-x. 2 root root 4.0K Mar 16 16:24 .
drwxr-xr-x. 4 root root 4.0K Mar 16 16:23 ..
-rw-------. 1 root root    9 Mar 25 10:10 cron.daily
-rw-------. 1 root root    9 Mar 16 16:39 cron.monthly
-rw-------. 1 root root    9 Mar 23 11:26 cron.weekly
[root@osestaging1 templates]# find ../shared/standalone/state/anacron-spool/
../shared/standalone/state/anacron-spool/
../shared/standalone/state/anacron-spool/cron.monthly
../shared/standalone/state/anacron-spool/cron.weekly
../shared/standalone/state/anacron-spool/cron.daily
[root@osestaging1 templates]# 
  1. oh, this is a great example: they created a file '/var/spool/cron/crontabs/root' to execute the letsencrypt renewal scripts. Note that I don't use this in the container, as our arch requires nginx to terminate https before varnish (and before the nginx outside the container and the nginx inside the container)
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /var/spool/cron/crontabs/
total 12K
drwx-wx--T. 1 root crontab 4.0K Mar 25 09:37 .
drwxr-xr-x. 1 root root    4.0K Mar 16 07:24 ..
-rw-r--r--. 1 root root      93 Mar 25 09:37 postgres
root@osestaging1-discourse-ose:/etc/cron.d# 
    1. strange, though. All it does is *touch* it
[root@osestaging1 templates]# grep -irC5 '/var/spool/cron/crontabs/root'
web.letsencrypt.ssl.template.yml-         - /bin/bash -c "if [[ ! \"$LETSENCRYPT_ACCOUNT_EMAIL\" =~ ([^@]+)@([^\.]+) ]]; then echo \"LETSENCRYPT_ACCOUNT_EMAIL is not a valid email address\"; exit 1; fi"
web.letsencrypt.ssl.template.yml-
web.letsencrypt.ssl.template.yml-    - exec:
web.letsencrypt.ssl.template.yml-       cmd:
web.letsencrypt.ssl.template.yml-         - cd /root && git clone --branch 2.8.2 --depth 1 https://github.com/Neilpang/acme.sh.git && cd /root/acme.sh
web.letsencrypt.ssl.template.yml:         - touch /var/spool/cron/crontabs/root
web.letsencrypt.ssl.template.yml-         - install -d -m 0755 -g root -o root $LETSENCRYPT_DIR
web.letsencrypt.ssl.template.yml-         - cd /root/acme.sh && LE_WORKING_DIR="${LETSENCRYPT_DIR}" ./acme.sh --install --log "${LETSENCRYPT_DIR}/acme.sh.log"
web.letsencrypt.ssl.template.yml-         - cd /root/acme.sh && LE_WORKING_DIR="${LETSENCRYPT_DIR}" ./acme.sh --upgrade --auto-upgrade
web.letsencrypt.ssl.template.yml-
web.letsencrypt.ssl.template.yml-    - file:
[root@osestaging1 templates]# 
  1. there's a note that this may have been moved to the base image, so let's check the docker build and git repo
[root@osestaging1 image]# grep -irC4 cron /var/discourse/image/
/var/discourse/image/base/Dockerfile-                       libssl-dev libyaml-dev libtool \
/var/discourse/image/base/Dockerfile-                       libxml2-dev gawk parallel \
/var/discourse/image/base/Dockerfile-                       postgresql-${PG_MAJOR} postgresql-client-${PG_MAJOR} \
/var/discourse/image/base/Dockerfile-                       postgresql-contrib-${PG_MAJOR} libpq-dev libreadline-dev \
/var/discourse/image/base/Dockerfile:                       cron anacron \
/var/discourse/image/base/Dockerfile-                       psmisc rsyslog vim whois brotli libunwind-dev \
/var/discourse/image/base/Dockerfile-                       libtcmalloc-minimal4 cmake
/var/discourse/image/base/Dockerfile:RUN sed -i -e 's/start -q anacron/anacron -s/' /etc/cron.d/anacron
/var/discourse/image/base/Dockerfile-RUN sed -i.bak 's/$ModLoad imklog/#$ModLoad imklog/' /etc/rsyslog.conf
/var/discourse/image/base/Dockerfile-RUN dpkg-divert --local --rename --add /sbin/initctl
/var/discourse/image/base/Dockerfile-RUN sh -c "test -f /sbin/initctl || ln -s /bin/true /sbin/initctl"
/var/discourse/image/base/Dockerfile-RUN apt -y install openssh-server
--
/var/discourse/image/base/Dockerfile-RUN mkdir -p /etc/runit/3.d
/var/discourse/image/base/Dockerfile-
/var/discourse/image/base/Dockerfile-ADD runit-1 /etc/runit/1
/var/discourse/image/base/Dockerfile-ADD runit-1.d-cleanup-pids /etc/runit/1.d/cleanup-pids
/var/discourse/image/base/Dockerfile:ADD runit-1.d-anacron /etc/runit/1.d/anacron
/var/discourse/image/base/Dockerfile-ADD runit-1.d-00-fix-var-logs /etc/runit/1.d/00-fix-var-logs
/var/discourse/image/base/Dockerfile-ADD runit-2 /etc/runit/2
/var/discourse/image/base/Dockerfile-ADD runit-3 /etc/runit/3
/var/discourse/image/base/Dockerfile-ADD boot /sbin/boot
/var/discourse/image/base/Dockerfile-
/var/discourse/image/base/Dockerfile:ADD cron /etc/service/cron/run
/var/discourse/image/base/Dockerfile-ADD rsyslog /etc/service/rsyslog/run
/var/discourse/image/base/Dockerfile:ADD cron.d_anacron /etc/cron.d/anacron
/var/discourse/image/base/Dockerfile-
/var/discourse/image/base/Dockerfile-# Discourse specific bits
/var/discourse/image/base/Dockerfile-RUN useradd discourse -s /bin/bash -m -U &&\
/var/discourse/image/base/Dockerfile-    mkdir -p /var/www &&\
--
/var/discourse/image/base/cron.d_anacron-
/var/discourse/image/base/cron.d_anacron-SHELL=/bin/sh
/var/discourse/image/base/cron.d_anacron-PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
/var/discourse/image/base/cron.d_anacron-
/var/discourse/image/base/cron.d_anacron:30 7    * * *   root   /usr/sbin/anacron -s >/dev/null
--
/var/discourse/image/base/cron-#!/bin/bash
/var/discourse/image/base/cron-exec 2>&1
/var/discourse/image/base/cron-cd /
/var/discourse/image/base/cron:exec cron -f
--
/var/discourse/image/base/runit-1.d-anacron-#!/bin/bash
/var/discourse/image/base/runit-1.d-anacron:/usr/sbin/anacron -s
[root@osestaging1 image]# 
  1. so it looks like anacron is chosen for systems that are not online 24/7 but still want less-frequent jobs (like monthly ones) to run even if their scheduled window was missed; the anacrontab field meanings are sketched after the file below. Here's the system's anacrontab file
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/anacrontab 
# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root
LOGNAME=root

# These replace cron's entries
1       5       cron.daily      run-parts --report /etc/cron.daily
7       10      cron.weekly     run-parts --report /etc/cron.weekly
@monthly        15      cron.monthly    run-parts --report /etc/cron.monthly
root@osestaging1-discourse-ose:/etc/cron.d# 
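  2. for reference, the anacrontab columns are: period (in days, or @monthly), delay (in minutes), job identifier, command. So a direct anacrontab entry for our job (a sketch only; not something I've added) would be something like:
# period  delay  job-id               command
1         10     unattended-upgrades  /usr/bin/nice /usr/bin/unattended-upgrades --debug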
  1. so it looks like you can just drop a script directly into that cron.daily dir; here's a sampling
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /etc/cron.daily
total 48K
drwxr-xr-x. 1 root root 4.0K Mar 16 07:26 .
drwxr-xr-x. 1 root root 4.0K Mar 25 11:03 ..
-rwxr-xr-x. 1 root root  311 May 19  2019 0anacron
-rwxr-xr-x. 1 root root 1.5K May 28  2019 apt-compat
-rwxr-xr-x. 1 root root 1.2K Apr 19  2019 dpkg
-rwxr-xr-x. 1 root root 4.1K Sep 27 16:07 exim4-base
-rwxr-xr-x. 1 root root  377 Aug 28  2018 logrotate
-rwxr-xr-x. 1 root root  249 Sep 27  2017 passwd
-rw-r--r--. 1 root root  102 Oct 11 07:58 .placeholder
-rwxr-xr-x. 1 root root  441 Apr  6  2019 sysstat
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/cron.daily/0anacron 
#!/bin/sh
#
# anacron's cron script
#
# This script updates anacron time stamps. It is called through run-parts
# either by anacron itself or by cron.
#
# The script is called "0anacron" to assure that it will be executed
# _before_ all other scripts.

test -x /usr/sbin/anacron || exit 0
anacron -u cron.daily
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/cron.daily/logrotate 
#!/bin/sh

# skip in favour of systemd timer
if [ -d /run/systemd/system ]; then
	exit 0
fi

# this cronjob persists removals (but not purges)
if [ ! -x /usr/sbin/logrotate ]; then
	exit 0
fi

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
	/usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit $EXITVALUE
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/cron.daily/.placeholder 
# DO NOT EDIT OR REMOVE
# This file is a simple placeholder to keep dpkg from removing this directory
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. the one change that this system appears to make to the `anacron` job is the addition of the '-s' argument, causing the jobs to be run serially (waiting for previous jobs to finish before calling another one)
  2. anacron may email reports, but I don't see any emails on this container
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /var/spool/mail
lrwxrwxrwx. 1 root root 7 Feb 24 00:00 /var/spool/mail -> ../mail
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /var/spool/mail/
total 12K
drwxrwsr-x. 2 root mail 4.0K Feb 24 00:00 .
drwxr-xr-x. 1 root root 4.0K Mar 25 09:38 ..
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /var/mail
total 12K
drwxrwsr-x. 2 root mail 4.0K Feb 24 00:00 .
drwxr-xr-x. 1 root root 4.0K Mar 25 09:38 ..
root@osestaging1-discourse-ose:/etc/cron.d
  1. oh, right! The "state" of anacron then is the list of cron jobs that have been executed and when so that anacron knows if, for example, a "monthly" job has been executed or not. again, it's empty for now
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /var/spool/anacron
lrwxrwxrwx. 1 root root 27 Mar 25 09:44 /var/spool/anacron -> /shared/state/anacron-spool
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /var/spool/anacron/
total 20K
drwxr-xr-x. 2 root root 4.0K Mar 16 16:24 .
drwxr-xr-x. 4 root root 4.0K Mar 16 16:23 ..
-rw-------. 1 root root    9 Mar 25 10:10 cron.daily
-rw-------. 1 root root    9 Mar 16 16:39 cron.monthly
-rw-------. 1 root root    9 Mar 23 11:26 cron.weekly
root@osestaging1-discourse-ose:/etc/cron.d# ls -lah /var/spool/anacron/*
-rw-------. 1 root root 9 Mar 25 10:10 /var/spool/anacron/cron.daily
-rw-------. 1 root root 9 Mar 16 16:39 /var/spool/anacron/cron.monthly
-rw-------. 1 root root 9 Mar 23 11:26 /var/spool/anacron/cron.weekly
root@osestaging1-discourse-ose:/etc/cron.d# 
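  2. those 9-byte spool files are, I believe, just the YYYYMMDD date of each job set's last run plus a newline, so this is a quick way to read the state:
cat /var/spool/anacron/cron.*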
  1. anacron doesn't actually appear to be running on the container, though
root@osestaging1-discourse-ose:/etc/cron.d# ps -ef | grep -i anacron
root     11629  7899  0 05:21 pts/2    00:00:00 grep -i anacron
root@osestaging1-discourse-ose:/etc/cron.d# 
    1. even though Discourse attempts to start it both from runit's stage-1 scripts (/etc/runit/1.d/anacron) and with a cron job
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/runit/1.d/anacron
#!/bin/bash
/usr/sbin/anacron -s
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/cron.d/anacron 
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

30 7    * * *   root    /usr/sbin/anacron -s >/dev/null
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. I confirmed that executing `anacron -s` *does* kick off an anacron daemon process, though
root@osestaging1-discourse-ose:/etc/cron.d# anacron -s
root@osestaging1-discourse-ose:/etc/cron.d# ps -ef | grep -i anacron
root     11904     0  0 05:25 ?        00:00:00 anacron -s
root     11910  7899  0 05:25 pts/2    00:00:00 grep -i anacron
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. something that *is* running by default is `cron -f`, supervised as a runit service (see the `sv` note after the output below)
root@osestaging1-discourse-ose:/etc/cron.d# ps -ef | grep -i cron
root       723   717  0 Mar25 ?        00:00:00 runsv cron
root       728   723  0 Mar25 ?        00:00:00 cron -f
root     11904     0  0 05:25 ?        00:00:00 anacron -s
root     12185  7899  0 05:29 pts/2    00:00:00 grep -i cron
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/service/cron/
run        supervise/
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/service/cron/run
#!/bin/bash
exec 2>&1
cd /
exec cron -f
root@osestaging1-discourse-ose:/etc/cron.d# 
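  2. since cron is supervised by runit here, the runit `sv` tool is how to poke it (a suggestion; not run yet):
sv status cron
sv restart cron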
  1. It's getting frustrating not having the man pages on the debian system to reference. I can read the ones on CentOS, but they may not be the same. Installing `man` doesn't install man pages for existing packages, only for newly-installed ones. Here's a suggested solution https://unix.stackexchange.com/questions/79028/debian-install-missing-man-pages
sudo apt-get install man
sudo apt-get install --reinstall -y --ignore-missing cron anacron
  1. ugh, that still didn't work. Anyway, it looks like `cron -f` makes it run in the foreground
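  2. one assumption worth checking: Debian-based docker images often strip man pages via a dpkg path-exclude snippet, in which case reinstalling packages won't bring the pages back until that exclude is removed. A sketch:
grep -r path-exclude /etc/dpkg/dpkg.cfg.d/ 2>/dev/null
# if /usr/share/man is excluded there, drop those lines and then reinstall, e.g.:
# sed -i '/path-exclude=\/usr\/share\/man/d' /etc/dpkg/dpkg.cfg.d/*
# apt-get install --reinstall -y cron anacron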
  3. well, somehow in playing with the above commands I managed to kick off the cron, but I guess it failed to run `unattended-upgrades` because I was doing something else with apt-get at the time (maybe trying to install man pages)
==> unattended-upgrades/unattended-upgrades.log <==
2020-03-26 05:56:45,291 INFO Initial whitelist: 
2020-03-26 05:56:45,291 INFO Starting unattended upgrades script
2020-03-26 05:56:45,293 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-26 05:56:45,295 ERROR Lock could not be acquired (another package manager running?)
2020-03-26 05:56:46,098 INFO Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
2020-03-26 05:56:46,108 INFO Initial blacklist : 
2020-03-26 05:56:46,109 INFO Initial whitelist: 
2020-03-26 05:56:46,109 INFO Starting unattended upgrades script
2020-03-26 05:56:46,110 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-26 05:56:46,111 ERROR Lock could not be acquired (another package manager running?)
  1. anyway, I'll just leave it until tomorrow. If it works, that tells me I might just need to do a `sv restart cron` (or similar) after putting the /etc/cron.d/unattended-upgrades file in place

Wed Mar 25, 2020

  1. It looks like the unattended-upgrade cron job didn't run again last night
  2. a manual dry run shows that it still *would* update the sudo package if it was kicked-off by the cron
root@osestaging1-discourse-ose:/etc/cron.d# unattended-upgrades --dry-run --debug
Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
Initial blacklist : 
Initial whitelist: 
Starting unattended upgrades script
Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
Using (^linux-image-[0-9]+\.[0-9\.]+-.*|^linux-headers-[0-9]+\.[0-9\.]+-.*|^linux-image-extra-[0-9]+\.[0-9\.]+-.*|^linux-modules-[0-9]+\.[0-9\.]+-.*|^linux-modules-extra-[0-9]+\.[0-9\.]+-.*|^linux-signed-image-[0-9]+\.[0-9\.]+-.*|^linux-image-unsigned-[0-9]+\.[0-9\.]+-.*|^kfreebsd-image-[0-9]+\.[0-9\.]+-.*|^kfreebsd-headers-[0-9]+\.[0-9\.]+-.*|^gnumach-image-[0-9]+\.[0-9\.]+-.*|^.*-modules-[0-9]+\.[0-9\.]+-.*|^.*-kernel-[0-9]+\.[0-9\.]+-.*|^linux-backports-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-tools-[0-9]+\.[0-9\.]+-.*|^linux-cloud-tools-[0-9]+\.[0-9\.]+-.*|^linux-buildinfo-[0-9]+\.[0-9\.]+-.*|^linux-source-[0-9]+\.[0-9\.]+-.*) regexp to find kernel packages
Using (^linux-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-signed-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-unsigned-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^gnumach-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-kernel-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-backports-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-cloud-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-buildinfo-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-source-3\.10\.0\-957\.21\.3\.el7\.x86_64$) regexp to find running kernel packages
Checking: sudo ([<Origin component:'main' archive:'stable' origin:'Debian' label:'Debian' site:'deb.debian.org' isTrusted:True>])
pkgs that look like they should be upgraded: sudo
Get:1 http://deb.debian.org/debian buster/main amd64 sudo amd64 1.8.27-1+deb10u2 [1245 kB]                                                        
Fetched 1245 kB in 0s (0 B/s)                                                                                                                     
fetch.run() result: 0
<apt_pkg.AcquireItem object:Status: 2 Complete: 1 Local: 0 IsTrusted: 1 FileSize: 1244824 DestFile:'/var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb' DescURI: 'http://deb.debian.org/debian/pool/main/s/sudo/sudo_1.8.27-1+deb10u2_amd64.deb' ID:1 ErrorText: ''>
check_conffile_prompt(/var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb)
found pkg: sudo
conffile line: /etc/init.d/sudo 1153f6e6fa7c0e2166779df6ad43f1a8
current md5: 1153f6e6fa7c0e2166779df6ad43f1a8
conffile line: /etc/pam.d/sudo 85da64f888739f193fc0fa896680030e
current md5: 85da64f888739f193fc0fa896680030e
conffile line: /etc/sudoers 45437b4e86fba2ab890ac81db2ec3606
current md5: 45437b4e86fba2ab890ac81db2ec3606
conffile line: /etc/sudoers.d/README 8d3cf36d1713f40a0ddc38e1b21a51b6
current md5: 8d3cf36d1713f40a0ddc38e1b21a51b6
blacklist: []
whitelist: []
Option --dry-run given, *not* performing real actions
Packages that will be upgraded: sudo
Writing dpkg log to /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
applying set ['sudo']
debconf: delaying package configuration, since apt-utils is not installed
/usr/bin/dpkg --status-fd 9 --no-triggers --unpack --auto-deconfigure /var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb 
/usr/bin/dpkg --status-fd 9 --configure --pending 
left to upgrade set()
All upgrades installed
InstCount=0 DelCount=0 BrokenCount=0
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. I'm not sure if the difference is just the flags, so let me copy & paste the exact command from the cron and see if it actually *does* upgrade sudo. yikes; that's a problem!
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/cron.d/unattended-upgrades 
################################################################################
# File:    /etc/cron.d/unattended-upgrades
# Version: 0.1
# Purpose: run unattended-upgrades in lieu of systemd. For more info see
#           * https://wiki.opensourceecology.org/wiki/Discourse
#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2020-03-23
# Updated: 2020-03-23
################################################################################
20 04 * * * root /bin/nice /usr/bin/unattended-upgrades --debug




root@osestaging1-discourse-ose:/etc/cron.d# /bin/nice /usr/bin/unattended-upgrades --debug
bash: /bin/nice: No such file or directory
root@osestaging1-discourse-ose:/etc/cron.d# which nice
/usr/bin/nice
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. fixing the path to `nice` does in fact result in the sudo package being updated
root@osestaging1-discourse-ose:/etc/cron.d# /usr/bin/nice /usr/bin/unattended-upgrades --debug
Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
Initial blacklist :
Initial whitelist:
Starting unattended upgrades script
Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
Using (^linux-image-[0-9]+\.[0-9\.]+-.*|^linux-headers-[0-9]+\.[0-9\.]+-.*|^linux-image-extra-[0-9]+\.[0-9\.]+-.*|^linux-modules-[0-9]+\.[0-9\.]+-.*|^linux-modules-extra-[0-9]+\.[0-9\.]+-.*|^linux-signed-image-[0-9]+\.[0-9\.]+-.*|^linux-image-unsigned-[0-9]+\.[0-9\.]+-.*|^kfreebsd-image-[0-9]+\.[0-9\.]+-.*|^kfreebsd-headers-[0-9]+\.[0-9\.]+-.*|^gnumach-image-[0-9]+\.[0-9\.]+-.*|^.*-modules-[0-9]+\.[0-9\.]+-.*|^.*-kernel-[0-9]+\.[0-9\.]+-.*|^linux-backports-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-tools-[0-9]+\.[0-9\.]+-.*|^linux-cloud-tools-[0-9]+\.[0-9\.]+-.*|^linux-buildinfo-[0-9]+\.[0-9\.]+-.*|^linux-source-[0-9]+\.[0-9\.]+-.*) regexp to find kernel packages
Using (^linux-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-signed-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-unsigned-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^gnumach-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-kernel-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-backports-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-cloud-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-buildinfo-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-source-3\.10\.0\-957\.21\.3\.el7\.x86_64$) regexp to find running kernel packages
Checking: sudo ([<Origin component:'main' archive:'stable' origin:'Debian' label:'Debian' site:'deb.debian.org' isTrusted:True>])
pkgs that look like they should be upgraded: sudo
Get:1 http://deb.debian.org/debian buster/main amd64 sudo amd64 1.8.27-1+deb10u2 [1245 kB]
Fetched 1245 kB in 0s (0 B/s)
fetch.run() result: 0
<apt_pkg.AcquireItem object:Status: 2 Complete: 1 Local: 0 IsTrusted: 1 FileSize: 1244824 DestFile:'/var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb' DescURI: 'http://deb.debian.org/debian/pool/main/s/sudo/sudo_1.8.27-1+deb10u2_amd64.deb' ID:1 ErrorText: ''>
check_conffile_prompt(/var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb)
found pkg: sudo
conffile line: /etc/init.d/sudo 1153f6e6fa7c0e2166779df6ad43f1a8
current md5: 1153f6e6fa7c0e2166779df6ad43f1a8
conffile line: /etc/pam.d/sudo 85da64f888739f193fc0fa896680030e
current md5: 85da64f888739f193fc0fa896680030e
conffile line: /etc/sudoers 45437b4e86fba2ab890ac81db2ec3606
current md5: 45437b4e86fba2ab890ac81db2ec3606
conffile line: /etc/sudoers.d/README 8d3cf36d1713f40a0ddc38e1b21a51b6
current md5: 8d3cf36d1713f40a0ddc38e1b21a51b6
blacklist: []
whitelist: []
Packages that will be upgraded: sudo
Writing dpkg log to /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
applying set ['sudo']
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 48062 files and directories currently installed.)
Preparing to unpack .../sudo_1.8.27-1+deb10u2_amd64.deb ...
Unpacking sudo (1.8.27-1+deb10u2) over (1.8.27-1+deb10u1) ...
Setting up sudo (1.8.27-1+deb10u2) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Processing triggers for systemd (241-7~deb10u3) ...
left to upgrade set()
All upgrades installed
InstCount=0 DelCount=0 BrokenCount=0
Extracting content from /var/log/unattended-upgrades/unattended-upgrades-dpkg.log since 2020-03-25 08:06:15
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. the above run wrote out these log entries
==> /var/log/syslog <==
Mar 25 08:05:01 osestaging1-discourse-ose CRON[30407]: Cannot make/remove an entry for the specified session

==> /var/log/unattended-upgrades/unattended-upgrades.log <==
2020-03-25 08:06:15,947 INFO Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
2020-03-25 08:06:15,954 INFO Initial blacklist :
2020-03-25 08:06:15,954 INFO Initial whitelist:
2020-03-25 08:06:15,955 INFO Starting unattended upgrades script
2020-03-25 08:06:15,955 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-25 08:06:17,310 DEBUG Using (^linux-image-[0-9]+\.[0-9\.]+-.*|^linux-headers-[0-9]+\.[0-9\.]+-.*|^linux-image-extra-[0-9]+\.[0-9\.]+-.*|^linux-modules-[0-9]+\.[0-9\.]+-.*|^linux-modules-extra-[0-9]+\.[0-9\.]+-.*|^linux-signed-image-[0-9]+\.[0-9\.]+-.*|^linux-image-unsigned-[0-9]+\.[0-9\.]+-.*|^kfreebsd-image-[0-9]+\.[0-9\.]+-.*|^kfreebsd-headers-[0-9]+\.[0-9\.]+-.*|^gnumach-image-[0-9]+\.[0-9\.]+-.*|^.*-modules-[0-9]+\.[0-9\.]+-.*|^.*-kernel-[0-9]+\.[0-9\.]+-.*|^linux-backports-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-tools-[0-9]+\.[0-9\.]+-.*|^linux-cloud-tools-[0-9]+\.[0-9\.]+-.*|^linux-buildinfo-[0-9]+\.[0-9\.]+-.*|^linux-source-[0-9]+\.[0-9\.]+-.*) regexp to find kernel packages
2020-03-25 08:06:17,320 DEBUG Using (^linux-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-signed-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-unsigned-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^gnumach-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-kernel-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-backports-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-cloud-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-buildinfo-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-source-3\.10\.0\-957\.21\.3\.el7\.x86_64$) regexp to find running kernel packages
2020-03-25 08:06:18,195 DEBUG Checking: sudo ([<Origin component:'main' archive:'stable' origin:'Debian' label:'Debian' site:'deb.debian.org' isTrusted:True>])
2020-03-25 08:06:18,815 DEBUG pkgs that look like they should be upgraded: sudo
2020-03-25 08:06:19,016 DEBUG fetch.run() result: 0
2020-03-25 08:06:19,313 DEBUG <apt_pkg.AcquireItem object:Status: 2 Complete: 1 Local: 0 IsTrusted: 1 FileSize: 1244824 DestFile:'/var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb' DescURI: 'http://deb.debian.org/debian/pool/main/s/sudo/sudo_1.8.27-1+deb10u2_amd64.deb' ID:1 ErrorText: ''>
2020-03-25 08:06:19,313 DEBUG check_conffile_prompt(/var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb)
2020-03-25 08:06:19,318 DEBUG found pkg: sudo
2020-03-25 08:06:19,320 DEBUG conffile line: /etc/init.d/sudo 1153f6e6fa7c0e2166779df6ad43f1a8
2020-03-25 08:06:19,320 DEBUG current md5: 1153f6e6fa7c0e2166779df6ad43f1a8
2020-03-25 08:06:19,320 DEBUG conffile line: /etc/pam.d/sudo 85da64f888739f193fc0fa896680030e
2020-03-25 08:06:19,320 DEBUG current md5: 85da64f888739f193fc0fa896680030e
2020-03-25 08:06:19,320 DEBUG conffile line: /etc/sudoers 45437b4e86fba2ab890ac81db2ec3606
2020-03-25 08:06:19,321 DEBUG current md5: 45437b4e86fba2ab890ac81db2ec3606
2020-03-25 08:06:19,321 DEBUG conffile line: /etc/sudoers.d/README 8d3cf36d1713f40a0ddc38e1b21a51b6
2020-03-25 08:06:19,321 DEBUG current md5: 8d3cf36d1713f40a0ddc38e1b21a51b6
2020-03-25 08:06:19,321 DEBUG blacklist: []
2020-03-25 08:06:19,322 DEBUG whitelist: []
2020-03-25 08:06:19,322 INFO Packages that will be upgraded: sudo
2020-03-25 08:06:19,322 INFO Writing dpkg log to /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
2020-03-25 08:06:19,552 DEBUG applying set ['sudo']

==> /var/log/unattended-upgrades/unattended-upgrades-dpkg.log <==
Log started: 2020-03-25  08:06:19
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 48062 files and directories currently installed.)
Preparing to unpack .../sudo_1.8.27-1+deb10u2_amd64.deb ...
Unpacking sudo (1.8.27-1+deb10u2) over (1.8.27-1+deb10u1) ...
Setting up sudo (1.8.27-1+deb10u2) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Processing triggers for systemd (241-7~deb10u3) ...
Log ended: 2020-03-25  08:06:22


==> /var/log/unattended-upgrades/unattended-upgrades.log <==
2020-03-25 08:06:23,567 DEBUG left to upgrade set()
2020-03-25 08:06:23,567 INFO All upgrades installed
2020-03-25 08:06:23,951 DEBUG InstCount=0 DelCount=0 BrokenCount=0
2020-03-25 08:06:23,953 DEBUG Extracting content from /var/log/unattended-upgrades/unattended-upgrades-dpkg.log since 2020-03-25 08:06:15
  1. ah, well, it's clear why I fucked this up. In RHEL/CentOS it's '/bin/nice' while in Debian (which the Discourse container is based on) it's '/usr/bin/nice'
[root@osestaging1 templates]# which nice
/bin/nice
[root@osestaging1 templates]# /var/discourse/launcher enter discourse_ose
root@osestaging1-discourse-ose:/var/www/discourse# which nice
/usr/bin/nice
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. I updated the template file that creates the unattended-upgrades cron file, rebuilt the discourse container, and downgraded sudo again. https://wiki.opensourceecology.org/wiki/Discourse#unattended-upgrades
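  2. for reference, the corrected cron entry inside the template should now read as follows (a sketch; note the template shown in the Tue Mar 24 entry below still has the old /bin/nice path)
20 04 * * * root /usr/bin/nice /usr/bin/unattended-upgrades --debug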

Tue Mar 24, 2020

  1. my cron to execute unattended-upgrades doesn't appear to have executed. I assume it's because the trailing newline was absent at the end of the file (I can't find anything on the Internet indicating whether that requirement is limited to crontab files or also applies to /etc/cron.d/* files)
  2. I decided to run a quick test by adding two cron files; one with the newline and one without
root@osestaging1-discourse-ose:/etc/cron.d# cat unattended-upgrades
################################################################################
# File:    /etc/cron.d/unattended-upgrades
# Version: 0.1
# Purpose: run unattended-upgrades in lieu of systemd. For more info see
#           * https://wiki.opensourceecology.org/wiki/Discourse
#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2020-03-23
# Updated: 2020-03-23
################################################################################
20 04 * * * root /bin/nice /usr/bin/unattended-upgrades --debugroot@osestaging1-discourse-ose:/etc/cron.d# 
root@osestaging1-discourse-ose:/etc/cron.d# 
root@osestaging1-discourse-ose:/etc/cron.d# echo -n '* * * * * root echo "no newline" >> /tmp/crontest.log ' > nonewline.cron
root@osestaging1-discourse-ose:/etc/cron.d# echo '* * * * * root echo "yes newline" >> /tmp/crontest.log' > yesnewline.cron
root@osestaging1-discourse-ose:/etc/cron.d# cat yesnewline.cron 
* * * * * root echo "yes newline" >> /tmp/crontest.log
root@osestaging1-discourse-ose:/etc/cron.d# cat nonewline.cron 
* * * * * root echo "no newline" >> /tmp/crontest.log root@osestaging1-discourse-ose:/etc/cron.d# 
root@osestaging1-discourse-ose:/etc/cron.d# tail /var/log/messages
Mar 23 11:47:09 osestaging1-discourse-ose rsyslogd:  [origin software="rsyslogd" swVersion="8.1901.0" x-pid="728" x-info="https://www.rsyslog.com"] start
Mar 23 12:00:32 osestaging1-discourse-ose rsyslogd:  [origin software="rsyslogd" swVersion="8.1901.0" x-pid="729" x-info="https://www.rsyslog.com"] start
Mar 23 12:11:07 osestaging1-discourse-ose rsyslogd:  [origin software="rsyslogd" swVersion="8.1901.0" x-pid="726" x-info="https://www.rsyslog.com"] start
Mar 23 12:28:16 osestaging1-discourse-ose rsyslogd:  [origin software="rsyslogd" swVersion="8.1901.0" x-pid="729" x-info="https://www.rsyslog.com"] start
root@osestaging1-discourse-ose:/etc/cron.d#
  1. but instead of seeing a new file at /tmp/crontest.log, I'm seeing these entries in the syslog
root@osestaging1-discourse-ose:/etc/cron.d# tail /var/log/syslog
Mar 24 11:35:01 osestaging1-discourse-ose CRON[24378]: Cannot make/remove an entry for the specified session
Mar 24 11:45:01 osestaging1-discourse-ose CRON[25020]: Cannot make/remove an entry for the specified session
Mar 24 11:55:01 osestaging1-discourse-ose CRON[25655]: Cannot make/remove an entry for the specified session
Mar 24 12:05:01 osestaging1-discourse-ose CRON[26290]: Cannot make/remove an entry for the specified session
Mar 24 12:15:01 osestaging1-discourse-ose CRON[26928]: Cannot make/remove an entry for the specified session
Mar 24 12:17:01 osestaging1-discourse-ose CRON[27056]: Cannot make/remove an entry for the specified session
Mar 24 12:25:01 osestaging1-discourse-ose CRON[27570]: Cannot make/remove an entry for the specified session
Mar 24 12:35:02 osestaging1-discourse-ose CRON[28214]: Cannot make/remove an entry for the specified session
Mar 24 12:38:01 osestaging1-discourse-ose cron[728]: (postgres) INSECURE MODE (mode 0600 expected) (crontabs/postgres)
Mar 24 12:39:01 osestaging1-discourse-ose cron[728]: (postgres) INSECURE MODE (mode 0600 expected) (crontabs/postgres)
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. I restarted the cron daemon using runit's `sv`, and I got the error message I was looking for, telling me that my 'unattended-upgrades' cron file is missing the newline before EOF
root@osestaging1-discourse-ose:/var/www/discourse# tail /var/log/syslog
Mar 24 12:57:58 osestaging1-discourse-ose anacron[29739]: Updated timestamp for job `cron.daily' to 2020-03-24
Mar 24 13:01:04 osestaging1-discourse-ose anacron[29414]: Received SIGUSR1
Mar 24 13:02:05 osestaging1-discourse-ose cron[30033]: (CRON) INFO (pidfile fd = 3)
Mar 24 13:02:05 osestaging1-discourse-ose cron[30033]: (*system*unattended-upgrades) ERROR (Missing newline before EOF, this crontab file will be ignored)
Mar 24 13:02:05 osestaging1-discourse-ose cron[30033]: (postgres) INSECURE MODE (mode 0600 expected) (crontabs/postgres)
Mar 24 13:02:05 osestaging1-discourse-ose cron[30033]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
Mar 24 13:03:04 osestaging1-discourse-ose cron[30097]: (CRON) INFO (pidfile fd = 3)
Mar 24 13:03:04 osestaging1-discourse-ose cron[30097]: (*system*unattended-upgrades) ERROR (Missing newline before EOF, this crontab file will be ignored)
Mar 24 13:03:04 osestaging1-discourse-ose cron[30097]: (postgres) INSECURE MODE (mode 0600 expected) (crontabs/postgres)
Mar 24 13:03:04 osestaging1-discourse-ose cron[30097]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. I'd done too much playing around with cron, so I decided to stop & start the container to see what's actually running by default
[root@osestaging1 sites-enabled]# /var/discourse/launcher enter discourse_ose
root@osestaging1-discourse-ose:/var/www/discourse# ps -ef | grep -i cron
root       537   532  0 13:06 ?        00:00:00 runsv cron
root       547   537  0 13:06 ?        00:00:00 cron -f
root       707   689  0 13:06 pts/1    00:00:00 grep -i cron
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. since there's no way to actually add a trailing newline to the end of a file defined by a multi-line block in the yaml template, I decided to just add a command to append a newline after the file is created
[root@osestaging1 templates]# cat unattended-upgrades.template.yml 
run:
  - file:
	 path: /etc/cron.d/unattended-upgrades
	 contents: |+
		################################################################################
		# File:    /etc/cron.d/unattended-upgrades
		# Version: 0.1
		# Purpose: run unattended-upgrades in lieu of systemd. For more info see
		#           * https://wiki.opensourceecology.org/wiki/Discourse
		#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
		# Author:  Michael Altfield <michael@opensourceecology.org>
		# Created: 2020-03-23
		# Updated: 2020-03-23
		################################################################################
		20 04 * * * root /bin/nice /usr/bin/unattended-upgrades --debug
        

  - exec: /bin/echo -e "\n" >> /etc/cron.d/unattended-upgrades
[root@osestaging1 templates]# 
  1. there are a lot of reports that the "Cannot make/remove an entry for the specified session" error I was seeing is caused by a permission issue in containers involving the pam_loginuid.so PAM module; the fix is described at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=726661 (a rough sketch of that workaround is below)
  2. anyway, I did a rebuild with the new template file above and downgraded sudo again. We'll see where it is after ~24 hours
apt-get install sudo=1.8.27-1+deb10u1
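  1. for completeness, the workaround described in that bug report (which I have *not* applied here; untested) is roughly to comment-out the pam_loginuid line in the container's /etc/pam.d/cron
# inside the container: disable pam_loginuid for cron (untested workaround from Debian bug #726661)
sed -i '/pam_loginuid\.so/ s/^/#/' /etc/pam.d/cron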

Mon Mar 23, 2020

  1. continuing from last week, my next task is to write a cron to execute unattended-upgrades in lieu of the systemd timer that doesn't work because the Discourse container doesn't have systemd! https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/
  2. I checked the unattended-upgrades logs; it looks like it last ran when I first built the image last week
root@osestaging1-discourse-ose:/var/www/discourse# tail -f /var/log/unattended-upgrades/unattended-upgrades*.log
==> /var/log/unattended-upgrades/unattended-upgrades-dpkg.log <==

==> /var/log/unattended-upgrades/unattended-upgrades.log <==
2020-03-16 16:31:44,993 INFO Initial blacklist :
2020-03-16 16:31:44,994 INFO Initial whitelist:
2020-03-16 16:31:44,994 INFO Starting unattended upgrades script
2020-03-16 16:31:44,994 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-16 16:31:48,327 INFO Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
2020-03-16 16:31:48,332 INFO Initial blacklist :
2020-03-16 16:31:48,333 INFO Initial whitelist:
2020-03-16 16:31:48,333 INFO Starting unattended upgrades script
2020-03-16 16:31:48,333 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-16 16:31:51,502 INFO No packages found that can be upgraded unattended and no pending auto-removals
  1. I kicked off a dry run to see if there were any security updates needed since last week; it looks like there are not, which means I can't test this as-is (unless I force a downgrade of a package to below some security-critical update)
root@osestaging1-discourse-ose:/var/www/discourse# unattended-upgrades --dry-run -d
Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
Initial blacklist :
Initial whitelist:
Starting unattended upgrades script
Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
Using (^linux-image-[0-9]+\.[0-9\.]+-.*|^linux-headers-[0-9]+\.[0-9\.]+-.*|^linux-image-extra-[0-9]+\.[0-9\.]+-.*|^linux-modules-[0-9]+\.[0-9\.]+-.*|^linux-modules-extra-[0-9]+\.[0-9\.]+-.*|^linux-signed-image-[0-9]+\.[0-9\.]+-.*|^linux-image-unsigned-[0-9]+\.[0-9\.]+-.*|^kfreebsd-image-[0-9]+\.[0-9\.]+-.*|^kfreebsd-headers-[0-9]+\.[0-9\.]+-.*|^gnumach-image-[0-9]+\.[0-9\.]+-.*|^.*-modules-[0-9]+\.[0-9\.]+-.*|^.*-kernel-[0-9]+\.[0-9\.]+-.*|^linux-backports-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-tools-[0-9]+\.[0-9\.]+-.*|^linux-cloud-tools-[0-9]+\.[0-9\.]+-.*|^linux-buildinfo-[0-9]+\.[0-9\.]+-.*|^linux-source-[0-9]+\.[0-9\.]+-.*) regexp to find kernel packages
Using (^linux-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-signed-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-unsigned-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^gnumach-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-kernel-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-backports-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-cloud-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-buildinfo-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-source-3\.10\.0\-957\.21\.3\.el7\.x86_64$) regexp to find running kernel packages
pkgs that look like they should be upgraded:
Fetched 0 B in 0s (0 B/s)
fetch.run() result: 0
blacklist: []
whitelist: []
No packages found that can be upgraded unattended and no pending auto-removals
root@osestaging1-discourse-ose:/var/www/discourse#
  1. when I initially discovered this issue, I realized that the Discourse container's unattended-upgrades had *not* updated git despite a needed security update announced as DSA-4581-1 https://www.debian.org/security/2019/dsa-4581
  2. The base OS for the Discourse docker container is Debian 10 = buster
root@osestaging1-discourse-ose:/var/www/discourse# cat /etc/issue
Debian GNU/Linux 10 \n \l

root@osestaging1-discourse-ose:/var/www/discourse# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. The current version of git installed is 1:2.20.1-2+deb10u1
root@osestaging1-discourse-ose:/var/www/discourse# dpkg -l | grep -i git
ii  findutils                       4.6.0+git+20190209-2         amd64        utilities for finding files--find, xargs
ii  git                             1:2.20.1-2+deb10u1           amd64        fast, scalable, distributed revision control system
ii  git-man                         1:2.20.1-2+deb10u1           all          fast, scalable, distributed revision control system (manual pages)
ii  libfuzzy2:amd64                 2.14.1+git20180629.57fcfff-1 amd64        recursive piecewise hashing tool (library)
ii  librtmp1:amd64                  2.4+20151223.gitfa8646d.1-2  amd64        toolkit for RTMP streams (shared library)
ii  libtiff-dev:amd64               4.1.0+git191117-2~deb10u1    amd64        Tag Image File Format library (TIFF), development files
ii  libtiff5:amd64                  4.1.0+git191117-2~deb10u1    amd64        Tag Image File Format (TIFF) library
ii  libtiffxx5:amd64                4.1.0+git191117-2~deb10u1    amd64        Tag Image File Format (TIFF) library -- C++ interface
ii  libwebp6:amd64                  0.6.1-2                      amd64        Lossy compression of digital photographic images.
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. The previous version (1:2.20.1-2) is the version that had this vulnerability
  2. Damn, it looks like there's no other version available though. I guess the patched version is the first and only release since Debian 10 came out
root@osestaging1-discourse-ose:/var/www/discourse# apt-cache policy git
git:
  Installed: 1:2.20.1-2+deb10u1
  Candidate: 1:2.20.1-2+deb10u1
  Version table:
 *** 1:2.20.1-2+deb10u1 500
		500 http://deb.debian.org/debian buster/main amd64 Packages
		500 http://security.debian.org/debian-security buster/updates/main amd64 Packages
		100 /var/lib/dpkg/status
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. checking the list of DSAs from 2020, I see that curl and sudo were also updated with security-critical patches https://www.debian.org/security/2020/
  2. I manually downgraded sudo and confirmed that unattended-upgrades' --dry-run wants to update it. cool
root@osestaging1-discourse-ose:/var/www/discourse# apt-get install sudo=1.8.27-1+deb10u1
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be DOWNGRADED:
  sudo
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.
Need to get 1,244 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://security.debian.org/debian-security buster/updates/main amd64 sudo amd64 1.8.27-1+deb10u1 [1,244 kB]
Fetched 1,244 kB in 0s (9,060 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
dpkg: warning: downgrading sudo from 1.8.27-1+deb10u2 to 1.8.27-1+deb10u1
(Reading database ... 48062 files and directories currently installed.)
Preparing to unpack .../sudo_1.8.27-1+deb10u1_amd64.deb ...
Unpacking sudo (1.8.27-1+deb10u1) over (1.8.27-1+deb10u2) ...
Setting up sudo (1.8.27-1+deb10u1) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Processing triggers for systemd (241-7~deb10u3) ...
root@osestaging1-discourse-ose:/var/www/discourse# 
root@osestaging1-discourse-ose:/var/www/discourse# unattended-upgrades --dry-run -d
Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
Initial blacklist :
Initial whitelist:
Starting unattended upgrades script
Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
Using (^linux-image-[0-9]+\.[0-9\.]+-.*|^linux-headers-[0-9]+\.[0-9\.]+-.*|^linux-image-extra-[0-9]+\.[0-9\.]+-.*|^linux-modules-[0-9]+\.[0-9\.]+-.*|^linux-modules-extra-[0-9]+\.[0-9\.]+-.*|^linux-signed-image-[0-9]+\.[0-9\.]+-.*|^linux-image-unsigned-[0-9]+\.[0-9\.]+-.*|^kfreebsd-image-[0-9]+\.[0-9\.]+-.*|^kfreebsd-headers-[0-9]+\.[0-9\.]+-.*|^gnumach-image-[0-9]+\.[0-9\.]+-.*|^.*-modules-[0-9]+\.[0-9\.]+-.*|^.*-kernel-[0-9]+\.[0-9\.]+-.*|^linux-backports-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-modules-.*-[0-9]+\.[0-9\.]+-.*|^linux-tools-[0-9]+\.[0-9\.]+-.*|^linux-cloud-tools-[0-9]+\.[0-9\.]+-.*|^linux-buildinfo-[0-9]+\.[0-9\.]+-.*|^linux-source-[0-9]+\.[0-9\.]+-.*) regexp to find kernel packages
Using (^linux-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-extra-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-signed-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-image-unsigned-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^kfreebsd-headers-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^gnumach-image-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-modules-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^.*-kernel-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-backports-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-modules-.*-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-cloud-tools-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-buildinfo-3\.10\.0\-957\.21\.3\.el7\.x86_64$|^linux-source-3\.10\.0\-957\.21\.3\.el7\.x86_64$) regexp to find running kernel packages
Checking: sudo ([<Origin component:'main' archive:'stable' origin:'Debian' label:'Debian' site:'deb.debian.org' isTrusted:True>])
pkgs that look like they should be upgraded: sudo
Get:1 http://deb.debian.org/debian buster/main amd64 sudo amd64 1.8.27-1+deb10u2 [1245 kB]
Fetched 1245 kB in 0s (0 B/s)
fetch.run() result: 0
<apt_pkg.AcquireItem object:Status: 2 Complete: 1 Local: 0 IsTrusted: 1 FileSize: 1244824 DestFile:'/var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb' DescURI: 'http://deb.debian.org/debian/pool/main/s/sudo/sudo_1.8.27-1+deb10u2_amd64.deb' ID:1 ErrorText: ''>
check_conffile_prompt(/var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb)
found pkg: sudo
conffile line: /etc/init.d/sudo 1153f6e6fa7c0e2166779df6ad43f1a8
current md5: 1153f6e6fa7c0e2166779df6ad43f1a8
conffile line: /etc/pam.d/sudo 85da64f888739f193fc0fa896680030e
current md5: 85da64f888739f193fc0fa896680030e
conffile line: /etc/sudoers 45437b4e86fba2ab890ac81db2ec3606
current md5: 45437b4e86fba2ab890ac81db2ec3606
conffile line: /etc/sudoers.d/README 8d3cf36d1713f40a0ddc38e1b21a51b6
current md5: 8d3cf36d1713f40a0ddc38e1b21a51b6
blacklist: []
whitelist: []
Option --dry-run given, *not* performing real actions
Packages that will be upgraded: sudo
Writing dpkg log to /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
applying set ['sudo']
debconf: delaying package configuration, since apt-utils is not installed
/usr/bin/dpkg --status-fd 9 --no-triggers --unpack --auto-deconfigure /var/cache/apt/archives/sudo_1.8.27-1+deb10u2_amd64.deb
/usr/bin/dpkg --status-fd 9 --configure --pending
left to upgrade set()
All upgrades installed
InstCount=0 DelCount=0 BrokenCount=0
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. I created a cron job to call unattended-upgrades at 04:20 every morning; we'll see if that kicks off within the next 24 hours
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/cron.d/unattended-upgrades 
################################################################################
# File:    /etc/cron.d/unattended-upgrades
# Version: 0.1
# Purpose: run unattended-upgrades in lieu of systemd. For more info see
#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2020-03-23
# Updated: 2020-03-23
################################################################################
20 04 * * * root /usr/bin/unattended-upgrades --debug
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. I documented this in the Discourse install guide https://wiki.opensourceecology.org/wiki/Discourse#unattended-upgrades
  2. now I need to test it (the template files) though. I tried to rebuild the app, but the rebuild failed with the old "Error response from daemon: container is marked for removal and cannot be started" error
[root@osestaging1 discourse]# time /var/discourse/launcher rebuild discourse_ose
...
169:M 23 Mar 2020 10:19:33.054 # Redis is now ready to exit, bye bye...
2020-03-23 10:19:33.127 UTC [52] LOG:  database system is shut down
sha256:6e6c81a3529175c1aa8e3391599499704f3abb9833ca3e943cf1b5443da4f47c
fbf51479947c537d2247bf38bd0ca2f1cb96257dbbf86e93038e6a19f2bab5d6
Removing old container
+ /bin/docker rm discourse_ose
Error response from daemon: container 12bb1e40517bb4893ff428096fa204f145c75d64be6a269cbe3093543373c6a8: driver "overlay2" failed to remove root filesystem: unlinkat /var/lib/docker/overlay2/99f609ae22d509152fd6db0120ba111c4d892b153d41d2e720790c864d5d678a/merged: device or resource busy

starting up existing container
+ /bin/docker start discourse_ose
Error response from daemon: container is marked for removal and cannot be started
Error: failed to start containers: discourse_ose

real    8m39.585s
user    0m1.764s
sys     0m1.684s
[root@osestaging1 discourse]# 
  1. attempting to manually remove the stuck container failed too
[root@osestaging1 discourse]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                PORTS               NAMES
12bb1e40517b        4d92ff0b76a7        "/sbin/boot"        6 days ago          Removal In Progress                       discourse_ose
[root@osestaging1 discourse]# docker rm 12bb1e40517b
Error response from daemon: container 12bb1e40517bb4893ff428096fa204f145c75d64be6a269cbe3093543373c6a8: driver "overlay2" failed to remove root filesystem: unlinkat /var/lib/docker/overlay2/99f609ae22d509152fd6db0120ba111c4d892b153d41d2e720790c864d5d678a/merged: device or resource busy
[root@osestaging1 discourse]# 
  1. A restart of docker changed the container's state from "Removal In Progress" to "Dead". Trying to remove it again failed, and the state changed back from "Dead" to "Removal In Progress"
[root@osestaging1 discourse]# systemctl restart docker
[root@osestaging1 discourse]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
12bb1e40517b        4d92ff0b76a7        "/sbin/boot"        6 days ago          Dead                                    discourse_ose
[root@osestaging1 discourse]# docker rm 12bb1e40517b
Error response from daemon: container 12bb1e40517bb4893ff428096fa204f145c75d64be6a269cbe3093543373c6a8: driver "overlay2" failed to remove root filesystem: unlinkat /var/lib/docker/overlay2/99f609ae22d509152fd6db0120ba111c4d892b153d41d2e720790c864d5d678a/merged: device or resource busy
[root@osestaging1 discourse]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                PORTS               NAMES
12bb1e40517b        4d92ff0b76a7        "/sbin/boot"        6 days ago          Removal In Progress                       discourse_ose
[root@osestaging1 discourse]# 
  1. these log entries appeared in the systemd journal at the same time
Mar 23 10:44:55 osestaging1 dockerd[16920]: time="2020-03-23T10:44:55.578997874Z" level=error msg="Error removing mounted layer 12bb1e40517bb4893ff428096fa204f145c75d64be6a269cbe3093543373c6a8: unlinkat /var/lib/docker/overlay2/99f609ae22d509152fd6db0120ba111c4d892b153d41d2e720790c864d5d678a/merged: device or resource busy"
Mar 23 10:44:55 osestaging1 dockerd[16920]: time="2020-03-23T10:44:55.579614708Z" level=error msg="Handler for DELETE /v1.40/containers/12bb1e40517b returned error: container 12bb1e40517bb4893ff428096fa204f145c75d64be6a269cbe3093543373c6a8: driver \"overlay2\" failed to remove root filesystem: unlinkat /var/lib/docker/overlay2/99f609ae22d509152fd6db0120ba111c4d892b153d41d2e720790c864d5d678a/merged: device or resource busy"
  1. I ended up fixing this by stopping docker, force `rm`ing the stuck container's directory, and then restarting docker (a rough sketch of those commands follows the rebuild output below). I documented this on the wiki's Discourse Troubleshooting section https://wiki.opensourceecology.org/wiki/Discourse#Removal_In_Progress
  2. ok, this time the build finished
[root@osestaging1 containers]# time /var/discourse/launcher rebuild discourse_ose
...
2020-03-23 11:14:03.222 UTC [56] LOG:  shutting down
2020-03-23 11:14:03.320 UTC [52] LOG:  database system is shut down
sha256:ee4f4a7346c88cd3d92873d71bb1aff73aff50c16b94b45ce758b2d39822f88b
ad165ac490026610471681616a77bf335f0d99bb29d2183522306d72493f525a

+ /bin/docker run --shm-size=512m -d --restart=always -e LANG=en_US.UTF-8 -e RAILS_ENV=production -e UNICORN_WORKERS=2 -e UNICORN_SIDEKIQS=1 -e RUBY_GLOBAL_METHOD_CACHE_SIZE=131072 -e RUBY_GC_HEAP_GROWTH_MAX_SLOTS=40000 -e RUBY_GC_HEAP_INIT_SLOTS=400000 -e RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.5 -e DISCOURSE_DB_SOCKET=/var/run/postgresql -e DISCOURSE_DB_HOST= -e DISCOURSE_DB_PORT= -e DISCOURSE_HOSTNAME=discourse.opensourceecology.org -e DISCOURSE_DEVELOPER_EMAILS=discourse@opensourceecology.org,ops@opensourceecology.org,marcin@opensourceecology.org,michael@opensourceecology.org -e DISCOURSE_SMTP_ADDRESS=172.17.0.1 -e DISCOURSE_SMTP_PORT=25 -e DISCOURSE_SMTP_AUTHENTICATION=none -e DISCOURSE_SMTP_OPENSSL_VERIFY_MODE=none -e DISCOURSE_SMTP_ENABLE_START_TLS=false -h osestaging1-discourse-ose -e DOCKER_HOST_IP=172.17.0.1 --name discourse_ose -t -v /var/discourse/shared/standalone:/shared -v /var/discourse/shared/standalone/log/var-log:/var/log --mac-address 02:fc:97:b8:b4:0d --cap-add NET_ADMIN local_discourse/discourse_ose /sbin/boot
eedfb91349dcef0cc2e7cc4fde8291a4fe4ab72709cf7ec495277b0339966ab3

real    7m56.057s
user    0m2.184s
sys     0m1.965s
[root@osestaging1 containers]# 
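  1. for the record, the "Removal In Progress" fix mentioned above boiled down to roughly the following (a sketch; the directory name comes from the container ID in the error messages, and the exact steps are documented on the wiki)
systemctl stop docker
# force-remove the stuck container's directory (and the overlay2 dir from the error, if it's still present)
rm -rf /var/lib/docker/containers/12bb1e40517bb4893ff428096fa204f145c75d64be6a269cbe3093543373c6a8
systemctl start docker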
  1. the new template for unattended-upgrades appears to work (the cron file was created inside the container)
root@osestaging1-discourse-ose:/etc/cron.d# cat /etc/cron.d/unattended-upgrades 
#!/bin/bash
################################################################################
# File:    /etc/cron.d/unattended-upgrades
# Version: 0.1
# Purpose: run unattended-upgrades in lieu of systemd. For more info see
#           * https://wiki.opensourceecology.org/wiki/Discourse
#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2020-03-23
# Updated: 2020-03-23
################################################################################
20 04 * * * root /bin/nice /usr/bin/unattended-upgrades --debugroot@osestaging1-discourse-ose:/etc/cron.d# 
  1. ugh, there's a shebang in that cron file! And no trailing newline. Let me try to fix that.
  2. apparently unattended-upgrades is run during the rebuild, which means there are now no new packages left to install. So to test the cron, I'll need to downgrade again and wait
root@osestaging1-discourse-ose:/etc/cron.d# tail /var/log/unattended-upgrades/unattended-upgrades.log 
2020-03-23 11:25:35,806 INFO Initial blacklist : 
2020-03-23 11:25:35,807 INFO Initial whitelist: 
2020-03-23 11:25:35,807 INFO Starting unattended upgrades script
2020-03-23 11:25:35,807 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-23 11:25:39,280 INFO Checking if system is running on battery is skipped. Please install powermgmt-base package to check power status and skip installing updates when the system is running on battery.
2020-03-23 11:25:39,285 INFO Initial blacklist : 
2020-03-23 11:25:39,285 INFO Initial whitelist: 
2020-03-23 11:25:39,285 INFO Starting unattended upgrades script
2020-03-23 11:25:39,285 INFO Allowed origins are: origin=Debian,codename=buster,label=Debian, origin=Debian,codename=buster,label=Debian-Security
2020-03-23 11:25:42,420 INFO No packages found that can be upgraded unattended and no pending auto-removals
root@osestaging1-discourse-ose:/etc/cron.d# 
  1. it looks like the trailing newlines can be preserved by switching the yaml block scalar indicator from a pipe ("|") to a "pipe plus" ("|+"), which keeps newlines at the end https://yaml.org/YAML_for_ruby.html#three_trailing_newlines_in_literals
[root@osestaging1 discourse]# cat templates/unattended-upgrades.template.yml
run:
  - file:
	 path: /etc/cron.d/unattended-upgrades
	 contents: |+
		################################################################################
		# File:    /etc/cron.d/unattended-upgrades
		# Version: 0.1
		# Purpose: run unattended-upgrades in lieu of systemd. For more info see
		#           * https://wiki.opensourceecology.org/wiki/Discourse
		#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
		# Author:  Michael Altfield <michael@opensourceecology.org>
		# Created: 2020-03-23
		# Updated: 2020-03-23
		################################################################################
		20 04 * * * root /bin/nice /usr/bin/unattended-upgrades --debug

[root@osestaging1 discourse]# 
  1. ugh, that didn't seem to help
root@osestaging1-discourse-ose:/etc/cron.d# cat unattended-upgrades
################################################################################
# File:    /etc/cron.d/unattended-upgrades
# Version: 0.1
# Purpose: run unattended-upgrades in lieu of systemd. For more info see
#           * https://wiki.opensourceecology.org/wiki/Discourse
#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2020-03-23
# Updated: 2020-03-23
################################################################################
20 04 * * * root /bin/nice /usr/bin/unattended-upgrades --debugroot@osestaging1-discourse-ose:/etc/cron.d# 
  1. I even added an additional line with spaces to it (yaml shit) that you probably can't see here on the wiki; no dice
[root@osestaging1 discourse]# cat templates/unattended-upgrades.template.yml 
run:
  - file:
	 path: /etc/cron.d/unattended-upgrades
	 contents: |+
		################################################################################
		# File:    /etc/cron.d/unattended-upgrades
		# Version: 0.1
		# Purpose: run unattended-upgrades in lieu of systemd. For more info see
		#           * https://wiki.opensourceecology.org/wiki/Discourse
		#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
		# Author:  Michael Altfield <michael@opensourceecology.org>
		# Created: 2020-03-23
		# Updated: 2020-03-23
		################################################################################
		20 04 * * * root /bin/nice /usr/bin/unattended-upgrades --debug
        

[root@osestaging1 discourse]# 
[root@osestaging1 discourse]# ./launcher enter discourse_ose
root@osestaging1-discourse-ose:/var/www/discourse# cat /etc/cron.d/unattended-upgrades 
################################################################################
# File:    /etc/cron.d/unattended-upgrades
# Version: 0.1
# Purpose: run unattended-upgrades in lieu of systemd. For more info see
#           * https://wiki.opensourceecology.org/wiki/Discourse
#           * https://meta.discourse.org/t/does-discourse-container-use-unattended-upgrades/136296/3
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2020-03-23
# Updated: 2020-03-23
################################################################################
20 04 * * * root /bin/nice /usr/bin/unattended-upgrades --debugroot@osestaging1-discourse-ose:/var/www/discourse# 
  1. well, nothing works to get that trailing newline (">+" is worse). I know crontab often wants a trailing newline; we'll just have to check whether this cron file actually works or not.
  2. ...
  3. it looks like the docker prune cron/script I added last week has some issues
[root@osestaging1 discourse]# tail -n25 /var/log/docker/prune.log 
+ DATE=/bin/date
+ stamp=
+ echo ================================================================================
================================================================================
+ echo 'INFO: Beginning docker pruning on '
INFO: Beginning docker pruning on 
+ container prune --force --filter until=672h
/usr/local/bin/docker-purge.sh: line 32: container: command not found

real    0m0.029s
user    0m0.001s
sys     0m0.000s
+ image prune --force --all --filter until=672h
/usr/local/bin/docker-purge.sh: line 33: image: command not found

real    0m0.021s
user    0m0.001s
sys     0m0.000s
+ system prune --force --all --filter until=672h
/usr/local/bin/docker-purge.sh: line 34: system: command not found

real    0m0.035s
user    0m0.001s
sys     0m0.000s
+ exit 0
[root@osestaging1 discourse]# 
  1. Hmm...even a manual run fails similarly
[root@osestaging1 discourse]# /usr/local/bin/docker-purge.sh
+ NICE=/bin/nice
+ DOCKER=/bin/docker
+ DATE=/bin/date
+ stamp=
+ echo ================================================================================
================================================================================
+ echo 'INFO: Beginning docker pruning on '
INFO: Beginning docker pruning on 
+ container prune --force --filter until=672h
/usr/local/bin/docker-purge.sh: line 32: container: command not found

real    0m0.011s
user    0m0.001s
sys     0m0.002s
+ image prune --force --all --filter until=672h
/usr/local/bin/docker-purge.sh: line 33: image: command not found

real    0m0.004s
user    0m0.002s
sys     0m0.000s
+ system prune --force --all --filter until=672h
/usr/local/bin/docker-purge.sh: line 34: system: command not found

real    0m0.007s
user    0m0.000s
sys     0m0.001s
+ exit 0
[root@osestaging1 discourse]# 
  1. ah, it looks like the issue is that bash expanded the variables when the file was created (presumably via an unquoted heredoc), so ${NICE}, ${DOCKER}, ${DATE}, and ${stamp} were written as empty strings
[root@osestaging1 discourse]# cat /usr/local/bin/docker-purge.sh
#!/bin/bash
set -x
################################################################################
# Author:  Michael Altfield <michael at opensourceecology dot org>
# Created: 2020-03-08
# Updated: 2020-03-23
# Version: 0.2
# Purpose: Cleanup docker garbage to prevent disk fill issues
################################################################################

############
# SETTINGS #
############

NICE='/bin/nice'
DOCKER='/bin/docker'
DATE='/bin/date'

##########
# HEADER #
##########

stamp=
echo "================================================================================"
echo "INFO: Beginning docker pruning on "

###################
# DOCKER COMMANDS #
###################

# automatically clean unused container and images that are >= 4 weeks old
time   container prune --force --filter until=672h
time   image prune --force --all --filter until=672h
time   system prune --force --all --filter until=672h

# exit cleanly
exit 0
[root@osestaging1 discourse]# 
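  1. to illustrate (a minimal sketch with a hypothetical target file): an unquoted heredoc delimiter expands variables at file-creation time, while a quoted delimiter writes them literally
# unquoted delimiter: ${DOCKER} is expanded (to an empty string if unset) when the file is written
cat << EOF > /tmp/example.sh
time ${DOCKER} container prune --force --filter until=672h
EOF

# quoted delimiter: the literal text '${DOCKER}' ends up in the file
cat << 'EOF' > /tmp/example.sh
time ${DOCKER} container prune --force --filter until=672h
EOF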
  1. I updated the documentation to use a different syntax when creating the files so the literal variables would be written to the file.
  2. ...
  3. the next major task is varnish
  4. varnish will be just a simple config that caches for some short time. Even if it were a cache of only 1 second, it would make a huge impact on our server if we got >1 request per second for a single URL (i.e. a reddit hug of death) or just heavy traffic to the front page.
  5. but I think we'll start with a 60-second cache and see how that goes. I don't see a 60-second delay before a new post shows up in a topic as a big issue, and the performance gains could be immense
  6. I started by copying the wiki's varnish config and modifying it by eye; this hasn't been tested at all (a quick syntax-check command is sketched after the config below)
################################################################################
# File:    discourse.opensourceecology.org.vcl
# Version: 0.1
# Purpose: Config file for OSE's discourse site
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2020-03-23
# Updated: 2020-03-23
################################################################################

vcl 4.0;

##########################
# VHOST-SPECIFIC BACKEND #
##########################

backend discourse_opensourceecology_org {
	.host = "127.0.0.1";
	.port = "8000";
}

sub vcl_recv {

	if ( req.http.host == "discourse.opensourceecology.org" ){

		set req.backend_hint = discourse_opensourceecology_org;

		# Serve objects up to 2 minutes past their expiry if the backend
  		# is slow to respond.
  		# set req.grace = 120s;
  		set req.http.X-Forwarded-For = client.ip;

  		# This uses the ACL action called "purge". Basically if a request to
		# PURGE the cache comes from anywhere other than localhost, ignore it.
  		if (req.method == "PURGE") {
  			if (!client.ip ~ purge) {
  				return (synth(405, "Not allowed."));
  			} else {
  				return (purge);
  			}
  		}

  		# Pass any requests that Varnish does not understand straight to the backend.
  		if (
	 	req.method != "GET" && req.method != "HEAD" &&
  	 	req.method != "PUT" && req.method != "POST" &&
  	 	req.method != "TRACE" && req.method != "OPTIONS" &&
  	 	req.method != "DELETE"
		) {
  			return (pipe);
  		} /* Non-RFC2616 or CONNECT which is weird. */

  		# Pass anything other than GET and HEAD directly.
  		if (req.method != "GET" && req.method != "HEAD") {
  			return (pass);
  		}      /* We only deal with GET and HEAD by default */

  		# Pass requests from logged-in users directly.
  		# Only detect cookies with "session" and "Token" in file name, otherwise nothing get cached.
  		if (req.http.Authorization || req.http.Cookie ~ "session" || req.http.Cookie ~ "Token") {
  			return (pass);
  		} /* Not cacheable by default */

  		# Pass any requests with the "If-None-Match" header directly.
  		if (req.http.If-None-Match) {
  			return (pass);
  		}

  		# Force lookup if the request is a no-cache request from the client.
  		if (req.http.Cache-Control ~ "no-cache") {
  			ban(req.url);
  		}

  		# normalize Accept-Encoding to reduce vary
  		if (req.http.Accept-Encoding) {
  			if (req.http.User-Agent ~ "MSIE 6") {
  				unset req.http.Accept-Encoding;
  			} elsif (req.http.Accept-Encoding ~ "gzip") {
  				set req.http.Accept-Encoding = "gzip";
  			} elsif (req.http.Accept-Encoding ~ "deflate") {
  				set req.http.Accept-Encoding = "deflate";
  			} else {
  				unset req.http.Accept-Encoding;
  			}
  		}

  		return (hash);

	}

}

sub vcl_hash {

	if ( req.http.host == "discourse.opensourceecology.org" ){

		# TODO

	}
}

sub vcl_backend_response {

	if ( beresp.backend.name == "discourse_opensourceecology_org" ){

		# Avoid caching error responses
		if ( beresp.status != 200 && beresp.status != 203 && beresp.status != 300 && beresp.status != 301 && beresp.status != 302 && beresp.status != 304 && beresp.status != 307 && beresp.status != 410 && beresp.status != 404 ) {
			set beresp.ttl   = 0s;
			set beresp.grace = 15s;
			return (deliver);
		}

		if (!beresp.ttl > 0s) {
			set beresp.uncacheable = true;
			return (deliver);
		}

		if (beresp.http.Set-Cookie) {
			set beresp.uncacheable = true;
			return (deliver);
		}

		if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
			set beresp.uncacheable = true;
			return (deliver);
		}
 
		if (beresp.http.Authorization && !beresp.http.Cache-Control ~ "public") {
			set beresp.uncacheable = true;
			return (deliver);
		}

		# always cache for 1-5 minutes with Discourse; we shouldn't set this to longer
		# because Discourse doesn't support PURGE. For more info, see:
		#  * https://meta.discourse.org/t/discourse-purge-cache-method-on-content-changes/132917/15
		set beresp.ttl = 1m;
		set beresp.grace = 5m;
		return (deliver);

	}

}

sub vcl_synth {

	if ( req.http.host == "discourse.opensourceecology.org" ){

		# TODO

	}

}

sub vcl_pipe {

	if ( req.http.host == "discourse.opensourceecology.org" ){

		# Note that only the first request to the backend will have
		# X-Forwarded-For set.  If you use X-Forwarded-For and want to
		# have it set for all requests, make sure to have:
		# set req.http.connection = "close";
 
		# This is otherwise not necessary if you do not do any request rewriting.
 
		#set req.http.connection = "close";

	}
}

sub vcl_hit {

	if ( req.http.host == "discourse.opensourceecology.org" ){

		# this is left-over from copying this config from the wiki's varnish config,
		# but it won't actually be used until Discourse implements PURGE

		if (req.method == "PURGE") {
			ban(req.url);
			return (synth(200, "Purged"));
		}

		if (!obj.ttl > 0s) {
			return (pass);
		}

	}
}

sub vcl_miss {

	if ( req.http.host == "discourse.opensourceecology.org" ){

		if (req.method == "PURGE")  {
			return (synth(200, "Not in cache"));
		}

	}
}

sub vcl_deliver {

	if ( req.http.host == "discourse.opensourceecology.org" ){

		# TODO

	}
}
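  1. as a first sanity check of the untested config above, something like this should at least compile the VCL without starting the daemon (the path is an assumption; adjust to wherever the vcl file actually lives)
# compile-check only; prints the generated C code and exits
varnishd -C -f /etc/varnish/discourse.opensourceecology.org.vcl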

Mon Mar 16, 2020

  1. continuing from last week, I'm in the process of doing the end-to-end Discourse install again on the newly-sync'd staging server
  2. first of all, I was surprised to realize that, even after I clobbered the staging server with the sync from prod, I was still able to ssh into it.
  3. but it looks like I already exclude the /etc/openvpn directory from the rsync, which would also prevent it from deleting any contents of that dir
[root@opensourceecology ~]# grep -A22 'rsync' /root/bin/syncToStaging.sh 
# rsync-path makes a non-root ssh user become root on the staging side
# exclude /home/b2user/sync* just saves space & time
# exclude /home/stagingsync* because 'stagingsync' should be able to access
#                            staging but not production
# exclude /etc/group so 'stagingsync' is in the 'sshaccess' group on staging
#                    but not on prod
# exclude /etc/sudo* as we want 'stagingsync' NOPASSWD on staging, not root

time nice rsync \
		-e "ssh -p ${STAGING_SSH_PORT} -i /root/.ssh/id_rsa.201910" \
		--bwlimit=3000 \
		--numeric-ids \
		--delete \
		--rsync-path="sudo rsync" \
		--exclude=/root \
		--exclude=/run \
		--exclude=/home/b2user/sync* \
		--exclude=/home/stagingsync* \
		--exclude=/etc/sudo* \
		--exclude=/etc/group \
		--exclude=/etc/openvpn \
		--exclude=/usr/share/easy-rsa \
		--exclude=/dev \
		--exclude=/sys \
		--exclude=/proc \
		--exclude=/boot/ \
		--exclude=/etc/sysconfig/network* \
		--exclude=/tmp \
		--exclude=/var/tmp \
		--exclude=/etc/fstab \
		--exclude=/etc/mtab \
		--exclude=/etc/mdadm.conf \
		--exclude=/etc/hostname \
		-av \
		--progress \
		/ ${SYNC_USERNAME}@${STAGING_HOST}:/
[root@opensourceecology ~]# 
  1. indeed, the previous config is there
[root@osestaging1 ~]# ls -lah /etc/openvpn
total 24K
drwxr-xr-x.   4 root root              4.0K Oct  3 07:29 .
drwxr-xr-x. 101 root root               12K Mar  8 18:25 ..
drwxr-x---.   2 root systemd-bus-proxy 4.0K Dec 28 13:06 client
drwxr-x---.   2 root systemd-bus-proxy 4.0K Feb 20  2019 server
[root@osestaging1 ~]# ls -lah /etc/openvpn/client
total 40K
drwxr-x---. 2 root systemd-bus-proxy 4.0K Dec 28 13:06 .
drwxr-xr-x. 4 root root              4.0K Oct  3 07:29 ..
-rw-------. 1 root root                19 Feb 16 08:49 auth.txt
-rw-------. 1 root root              1.9K Oct  3 13:33 ca.crt
-rw-------. 1 root root              3.7K Dec 16 10:23 client.conf
-rwx------. 1 root root               255 Dec 16 10:22 connect.sh
-rw-------. 1 root root              5.6K Oct  3 13:33 osestaging1.crt
-rw-------. 1 root root              1.7K Oct  3 13:33 osestaging1.key
-rw-------. 1 root root               636 Oct  3 13:33 ta.key
[root@osestaging1 ~]# 
  1. The `docker info` command is still hanging, same as last time
[root@osestaging1 discourse]# docker info



^C
[root@osestaging1 discourse]#
  1. I gave it a restart, and that worked!
[root@osestaging1 discourse]# systemctl restart docker
[root@osestaging1 discourse]# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.7
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-957.21.3.el7.x86_64
 Operating System: CentOS Linux 7 (Core) (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 1.748GiB
 Name: osestaging1
 ID: POXN:CAJH:TQMM:KJUO:7FCA:AHKJ:6SO3:RZ2S:MXLZ:SC4W:PSLY:2A5E
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[root@osestaging1 discourse]#
  1. ok, continuing where I left off with the documentation, let's try the `docker build` step
[root@osestaging1 discourse]# time nice docker build --tag 'discourse_ose' /var/discourse/image/base/
Sending build context to Docker daemon  35.33kB
Step 1/61 : FROM debian:buster-slim
buster-slim: Pulling from library/debian
68ced04f60ab: Pull complete
Digest: sha256:b137d31f0ebcce71c9dde707975e1d6582afa178e7033ccb341ddba04d807043
Status: Downloaded newer image for debian:buster-slim
...
  1. The build finished successfully, and I updated the documentation's validation step
  2. did a ton of other documentation changes as needed, and made it to the end of the install guide.
  3. the guide was missing the necessary `./launcher bootstrap` (or `./launcher rebuild`?) step
  4. unfortunately, this failed in an endless loop
[root@osestaging1 discourse]# /var/discourse/launcher rebuild discourse_ose
Ensuring launcher is up to date
Fetching origin
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 6 (delta 2), reused 2 (delta 2), pack-reused 0
Unpacking objects: 100% (6/6), done.
From https://github.com/discourse/discourse_docker
   bb9a173..b0c92ba  master     -> origin/master
Updating Launcher
Updating bb9a173..b0c92ba
error: Your local changes to the following files would be overwritten by merge:
		launcher
Please, commit your changes or stash them before you can merge.
Aborting
failed to update
Ensuring launcher is up to date
Fetching origin
Updating Launcher
Updating bb9a173..b0c92ba
error: Your local changes to the following files would be overwritten by merge:
		launcher
Please, commit your changes or stash them before you can merge.
Aborting
failed to update
Ensuring launcher is up to date
Fetching origin
Updating Launcher
Updating bb9a173..b0c92ba
...
  1. yeah, so it's refusing to do the `git pull` step because I've modified the 'launcher' file.
  2. I did some cleanup of the updating sections and added a section addressing this issue to the Troubleshooting section (see the sketch below). I updated it all to link back to the relevant install guide section so we don't have the commands listed twice in distinct locations
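    1. as a rough sketch of the workaround (my own plain-git example, not necessarily the exact commands I put on the wiki): stash our local changes to 'launcher', let the rebuild do its `git pull`, then re-apply them (a conflict is possible if upstream also touched 'launcher')
	cd /var/discourse
	git stash                           # set aside our local modifications to 'launcher'
	./launcher rebuild discourse_ose    # the launcher self-update (git pull) can now succeed
	git stash pop                       # re-apply our modifications; resolve conflicts if any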
  3. I also did a diff after moving the files out of the way; looks like the only thing that *really* changed is they've removed the absolute path to bash. Shouldn't be an issue
[root@osestaging1 discourse]# diff "${vhostDir}/image/base/install-nginx.${stamp}" "${vhostDir}/image/base/install-nginx"
21,25d20
< # mod_security --maltfield
< apt-get install -y libmodsecurity-dev modsecurity-crs
< cd /tmp
< git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git
< 
39,40c34
< #./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli
< ./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli --add-module=/tmp/ModSecurity-nginx
---
> ./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli
47d40
< rm -fr /tmp/ModSecurity-nginx
[root@osestaging1 discourse]# diff "${vhostDir}/launcher.${stamp}" "${vhostDir}/launcher"
91,92c91
< #image="discourse/base:2.0.20200220-2221"
< image="discourse_ose"
---
> image="discourse/base:2.0.20200220-2221"
792c791
<           exec /bin/bash $0 "${args[@]}" # $@ is empty, because of shift at the beginning. Use BASH_ARGV instead.
---
>           exec bash $0 "${args[@]}" # $@ is empty, because of shift at the beginning. Use BASH_ARGV instead.
[root@osestaging1 discourse]# 
  1. now that `git pull` is working, I tried the rebuild, but it failed very fast because it couldn't find the nginx config file from _inside_ the container
[root@osestaging1 base]# time /var/discourse/rebuild discourse_ose
bash: /var/discourse/rebuild: No such file or directory

real    0m0.003s
user    0m0.000s
sys     0m0.003s
[root@osestaging1 base]# time /var/discourse/launcher rebuild discourse_ose
fatal: ref HEAD is not a symbolic ref
cd /pups && git pull && /pups/bin/pups --stdin
Already up to date.
I, [2020-03-16T15:13:05.278467 #1]  INFO -- : Loading --stdin
I, [2020-03-16T15:13:05.302805 #1]  INFO -- : File > /etc/runit/1.d/01-iptables  chmod: +x  chown:
I, [2020-03-16T15:13:05.313448 #1]  INFO -- : File > /etc/runit/1.d/remove-old-socket  chmod: +x  chown:
I, [2020-03-16T15:13:05.323923 #1]  INFO -- : File > /etc/runit/3.d/remove-old-socket  chmod: +x  chown:


FAILED
--------------------
Errno::ENOENT: No such file or directory @ rb_sysopen - /etc/nginx/conf.d/discourse.conf
Location of failure: /pups/lib/pups/replace_command.rb:8:in `read'
replace failed with the params {"filename"=>"/etc/nginx/conf.d/discourse.conf", "from"=>"/listen 80;/", "to"=>"listen unix:/shared/nginx.http.sock;\nset_real_ip_from unix:;\n"}
b01e5b2ba3b6238a1ca19aa8cf66cac124ea03d2c3c90f1b517f4f79bb1c3635
 FAILED TO BOOTSTRAP  please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.

real    0m16.270s
user    0m1.307s
sys     0m1.122s
[root@osestaging1 base]# 
  1. I suspect that the order of the templates in the container's yaml file is important, but only for the first run; I moved the 'web.socketed.template.yml' file after the 'web.template.yml' file in the list
[root@osestaging1 discourse]# grep -A10 'templates:' containers/discourse_ose.yml
templates:
  - "templates/iptables.template.yml"
  - "templates/web.modsecurity.template.yml"
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.socketed.template.yml"
  - "templates/web.ratelimited.template.yml"
## Uncomment these two lines if you wish to add Lets Encrypt (https)
  #- "templates/web.ssl.template.yml"
  #- "templates/web.letsencrypt.ssl.template.yml"
[root@osestaging1 discourse]# 
  1. this time it made it further, but died on the modsecurity one.
[root@osestaging1 discourse]# time /var/discourse/launcher rebuild discourse_ose
fatal: ref HEAD is not a symbolic ref
cd /pups && git pull && /pups/bin/pups --stdin
Already up to date.
I, [2020-03-16T15:17:55.659292 #1]  INFO -- : Loading --stdin
I, [2020-03-16T15:17:55.676426 #1]  INFO -- : File > /etc/runit/1.d/01-iptables  chmod: +x  chown:
I, [2020-03-16T15:17:55.676878 #1]  INFO -- : > sudo apt-get update
I, [2020-03-16T15:17:59.247188 #1]  INFO -- : Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Hit:2 http://deb.debian.org/debian buster InRelease
Get:3 http://deb.debian.org/debian buster-updates InRelease [49.3 kB]
Hit:4 https://deb.nodesource.com/node_10.x buster InRelease
Hit:5 http://apt.postgresql.org/pub/repos/apt buster-pgdg InRelease
Fetched 115 kB in 2s (72.1 kB/s)
Reading package lists...

I, [2020-03-16T15:17:59.247809 #1]  INFO -- : > sudo apt-get install -y modsecurity-crs
I, [2020-03-16T15:18:01.149772 #1]  INFO -- : Reading package lists...
Building dependency tree...
Reading state information...
modsecurity-crs is already the newest version (3.1.0-1+deb10u1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

I, [2020-03-16T15:18:01.150054 #1]  INFO -- : > cp /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf
I, [2020-03-16T15:18:01.157756 #1]  INFO -- :
I, [2020-03-16T15:18:01.157936 #1]  INFO -- : > sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/modsecurity/modsecurity.conf
I, [2020-03-16T15:18:01.164284 #1]  INFO -- :
I, [2020-03-16T15:18:01.164433 #1]  INFO -- : > sed -i 's^\(\s*\)[^#]*SecRequestBodyInMemoryLimit\(.*\)^\1#SecRequestBodyInMemoryLimit\2^' /etc/modsecurity/modsecurity.conf
I, [2020-03-16T15:18:01.170804 #1]  INFO -- :
I, [2020-03-16T15:18:01.170931 #1]  INFO -- : > sed -i '/nginx/! s%^\(\s*\)[^#]*SecAuditLog \(.*\)%#\1SecAuditLog \2\n\1SecAuditLog /var/log/nginx/modsec_audit.log%' /etc/modsecurity/modsecurity.conf
I, [2020-03-16T15:18:01.176752 #1]  INFO -- :
I, [2020-03-16T15:18:01.181918 #1]  INFO -- : File > /etc/nginx/conf.d/modsecurity.include  chmod:   chown:


FAILED
--------------------
Errno::ENOENT: No such file or directory @ rb_sysopen - /etc/nginx/conf.d/discourse.conf
Location of failure: /pups/lib/pups/replace_command.rb:8:in `read'
replace failed with the params {"filename"=>"/etc/nginx/conf.d/discourse.conf", "from"=>"/server.+{/", "to"=>"server {\n  modsecurity on;\n  modsecurity_rules_file /etc/nginx/conf.d/modsecurity.include;"}
d1526331637357ec8ad42ede068e732b410ac166ce50359664217e7cffa5bb1b
 FAILED TO BOOTSTRAP  please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.

real    0m21.712s
user    0m1.268s
sys     0m1.063s
[root@osestaging1 discourse]# 
  1. I moved the mod-security one later too
[root@osestaging1 discourse]# grep -A10 'templates:' containers/discourse_ose.yml
templates:
  - "templates/iptables.template.yml"
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.socketed.template.yml"
  - "templates/web.modsecurity.template.yml"
  - "templates/web.ratelimited.template.yml"
## Uncomment these two lines if you wish to add Lets Encrypt (https)
  #- "templates/web.ssl.template.yml"
  #- "templates/web.letsencrypt.ssl.template.yml"
[root@osestaging1 discourse]# 
  1. it failed again, stating that the DISCOURSE_HOSTNAME is invalid. Right, it is; I updated our documentation
183:M 16 Mar 2020 15:24:21.630 * Ready to accept connections
I, [2020-03-16T15:24:31.565664 #1]  INFO -- :
I, [2020-03-16T15:24:31.566094 #1]  INFO -- : > thpoff echo "thpoff is installed!"
I, [2020-03-16T15:24:31.575799 #1]  INFO -- : thpoff is installed!

I, [2020-03-16T15:24:31.576060 #1]  INFO -- : > /usr/local/bin/ruby -e 'if ENV["DISCOURSE_SMTP_ADDRESS"] == "smtp.example.com"; puts "Aborting! Mail is not configured!"; exit 1; end'
I, [2020-03-16T15:24:31.735726 #1]  INFO -- :
I, [2020-03-16T15:24:31.736005 #1]  INFO -- : > /usr/local/bin/ruby -e 'if ENV["DISCOURSE_HOSTNAME"] == "discourse.example.com"; puts "Aborting! Domain is not configured!"; exit 1; end'
I, [2020-03-16T15:24:31.920727 #1]  INFO -- : Aborting! Domain is not configured!

I, [2020-03-16T15:24:31.921338 #1]  INFO -- : Terminating async processes
I, [2020-03-16T15:24:31.921425 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/10/bin/postmaster -D /etc/postgresql/10/main pid: 66
I, [2020-03-16T15:24:31.921482 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 183
2020-03-16 15:24:31.923 UTC [66] LOG:  received fast shutdown request
183:signal-handler (1584372271) Received SIGTERM scheduling shutdown...
183:M 16 Mar 2020 15:24:31.923 # User requested shutdown...
183:M 16 Mar 2020 15:24:31.923 * Saving the final RDB snapshot before exiting.
2020-03-16 15:24:31.933 UTC [66] LOG:  aborting any active transactions
183:M 16 Mar 2020 15:24:31.945 * DB saved on disk
183:M 16 Mar 2020 15:24:31.945 # Redis is now ready to exit, bye bye...
2020-03-16 15:24:31.946 UTC [66] LOG:  worker process: logical replication launcher (PID 75) exited with exit code 1
2020-03-16 15:24:31.946 UTC [70] LOG:  shutting down
2020-03-16 15:24:32.050 UTC [66] LOG:  database system is shut down


FAILED
--------------------
Pups::ExecError: /usr/local/bin/ruby -e 'if ENV["DISCOURSE_HOSTNAME"] == "discourse.example.com"; puts "Aborting! Domain is not configured!"; exit 1; end' failed with return #<Process::Status: pid 193 exit 1>
Location of failure: /pups/lib/pups/exec_command.rb:112:in `spawn'
exec failed with the params "/usr/local/bin/ruby -e 'if ENV[\"DISCOURSE_HOSTNAME\"] == \"discourse.example.com\"; puts \"Aborting! Domain is not configured!\"; exit 1; end'"
c32f32b7d350c1e679daad616d2bfd8213fb475bbb84136c73ceb0fc51ba5029
 FAILED TO BOOTSTRAP  please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.

real    0m46.492s
user    0m1.246s
sys     0m1.070s
[root@osestaging1 discourse]# time /var/discourse/launcher rebuild discourse_ose
  1. I updated the documentation with a section to update the container's yaml vars for DISCOURSE_HOSTNAME and DISCOURSE_DEVELOPER_EMAILS.
[root@osestaging1 discourse]# grep -C1 -E 'DISCOURSE_HOSTNAME|DISCOURSE_DEVELOPER_EMAILS' /var/discourse/containers/discourse_ose.yml
env:
  DISCOURSE_HOSTNAME: 'discourse@opensourceecology.org,ops@opensourceecology.org,marcin@opensourceecology.org,michael@opensourceecology.org'


  DISCOURSE_DEVELOPER_EMAILS: discourse.opensourceecology.org

--
  ## Required. Discourse will not work with a bare IP number.
  #DISCOURSE_HOSTNAME: 'discourse.example.com'

--
  ## on initial signup example 'user1@example.com,user2@example.com'
  #DISCOURSE_DEVELOPER_EMAILS: 'me@example.com,you@example.com'

[root@osestaging1 discourse]# 
  1. It failed again, but this time I couldn't actually find any error in the output except that the command exited 1; not sure why
[root@osestaging1 discourse]# time /var/discourse/launcher rebuild discourse_ose
...

##### Seed from /var/www/discourse/db/fixtures/010_uploads.rb
 - Upload {:id=>-1, :user_id=>-1, :original_filename=>"d-logo-sketch.png", :url=>"/images/d-logo-sketch.png", :filesize=>14461, :extension=>"png", :sha1=>"_aa4aed9d6276bab017d3991051fbb9177783abe"}
 - Upload {:id=>-2, :user_id=>-1, :original_filename=>"d-logo-sketch-small.png", :url=>"/images/d-logo-sketch-small.png", :filesize=>6175, :extension=>"png", :sha1=>"_bc7909b0ece56689d3551a51911e6ee1810ad31"}
 - Upload {:id=>-3, :user_id=>-1, :original_filename=>"default-favicon.ico", :url=>"/images/default-favicon.ico", :filesize=>6518, :extension=>"ico", :sha1=>"_bacea4851373b7c6c0ed37c8576245cc8eeef34"}
 - Upload {:id=>-4, :user_id=>-1, :original_filename=>"default-apple-touch-icon.png", :url=>"/images/default-apple-touch-icon.png", :filesize=>8166, :extension=>"png", :sha1=>"_9cbb8e9d5ecd0c8ec8f48ded2f11025970ab194"}
 - Upload {:id=>-5, :user_id=>-1, :original_filename=>"discourse-logo-sketch.png", :url=>"/images/discourse-logo-sketch.png", :filesize=>169105, :extension=>"png", :sha1=>"_abaabe41348866838adb7a011cdd530175a53df"}
 - Upload {:id=>-6, :user_id=>-1, :original_filename=>"discourse-logo-sketch-small.png", :url=>"/images/discourse-logo-sketch-small.png", :filesize=>62655, :extension=>"png", :sha1=>"_129430568242d1b7f853bb13ebea28b3f6af4e7"}

##### Seed from /var/www/discourse/db/fixtures/500_categories.rb

I, [2020-03-16T15:52:02.818452 #1]  INFO -- : Terminating async processes
I, [2020-03-16T15:52:02.818568 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/10/bin/postmaster -D /etc/postgresql/10/main pid: 51
I, [2020-03-16T15:52:02.818665 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 168
2020-03-16 15:52:02.832 UTC [51] LOG:  received fast shutdown request
2020-03-16 15:52:02.855 UTC [51] LOG:  aborting any active transactions
2020-03-16 15:52:02.874 UTC [51] LOG:  worker process: logical replication launcher (PID 60) exited with exit code 1
2020-03-16 15:52:02.875 UTC [55] LOG:  shutting down
168:signal-handler (1584373922) Received SIGTERM scheduling shutdown...
168:M 16 Mar 2020 15:52:02.843 # User requested shutdown...
168:M 16 Mar 2020 15:52:02.843 * Saving the final RDB snapshot before exiting.
168:M 16 Mar 2020 15:52:02.884 * DB saved on disk
168:M 16 Mar 2020 15:52:02.884 # Redis is now ready to exit, bye bye...
2020-03-16 15:52:03.220 UTC [51] LOG:  database system is shut down


FAILED
--------------------
Pups::ExecError: cd /var/www/discourse && su discourse -c 'bundle exec rake db:migrate' failed with return #<Process::Status: pid 351 exit 1>
Location of failure: /pups/lib/pups/exec_command.rb:112:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"db_migrate", "cmd"=>["su discourse -c 'bundle exec rake db:migrate'"]}
5d45286f87191249c9dd70efe5a844bf075d80419737a859b7e773b34fd74215
 FAILED TO BOOTSTRAP  please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.

real    2m41.851s
user    0m1.364s
sys     0m1.174s
[root@osestaging1 discourse]# 
  1. oh, duh, I set the hostname to the emails and emails to the hostname. I fixed it
[root@osestaging1 discourse]# grep -C1 -E 'DISCOURSE_HOSTNAME|DISCOURSE_DEVELOPER_EMAILS' /var/discourse/containers/discourse_ose.yml
env:
  DISCOURSE_HOSTNAME: 'discourse.opensourceecology.org'
  DISCOURSE_DEVELOPER_EMAILS: discourse@opensourceecology.org,ops@opensourceecology.org,marcin@opensourceecology.org,michael@opensourceecology.org

--
  ## Required. Discourse will not work with a bare IP number.
  #DISCOURSE_HOSTNAME: 'discourse.example.com'

--
  ## on initial signup example 'user1@example.com,user2@example.com'
  #DISCOURSE_DEVELOPER_EMAILS: 'me@example.com,you@example.com'

[root@osestaging1 discourse]# 
  1. this time the build succeeded!
2020-03-16 16:22:07.649 UTC [51] LOG:  database system is shut down
sha256:4d92ff0b76a725a5252fce8567e961fc01eebe68c2b34d1abc9c94cae041597e
9590f6a4719a41690b90cd9a5467460b0efe978927d6152cd9a83d4d4f4d0217

+ /bin/docker run --shm-size=512m -d --restart=always -e LANG=en_US.UTF-8 -e RAILS_ENV=production -e UNICORN_WORKERS=2 -e UNICORN_SIDEKIQS=1 -e RUBY_GLOBAL_METHOD_CACHE_SIZE=131072 -e RUBY_GC_HEAP_GROWTH_MAX_SLOTS=40000 -e RUBY_GC_HEAP_INIT_SLOTS=400000 -e RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.5 -e DISCOURSE_DB_SOCKET=/var/run/postgresql -e DISCOURSE_DB_HOST= -e DISCOURSE_DB_PORT= -e DISCOURSE_HOSTNAME=discourse.opensourceecology.org -e DISCOURSE_DEVELOPER_EMAILS=discourse@opensourceecology.org,ops@opensourceecology.org,marcin@opensourceecology.org,michael@opensourceecology.org -e DISCOURSE_SMTP_ADDRESS=172.17.0.1 -e DISCOURSE_SMTP_PORT=25 -e DISCOURSE_SMTP_AUTHENTICATION=none -e DISCOURSE_SMTP_OPENSSL_VERIFY_MODE=none -e DISCOURSE_SMTP_ENABLE_START_TLS=false -h osestaging1-discourse-ose -e DOCKER_HOST_IP=172.17.0.1 --name discourse_ose -t -v /var/discourse/shared/standalone:/shared -v /var/discourse/shared/standalone/log/var-log:/var/log --mac-address 02:fc:97:b8:b4:0d --cap-add NET_ADMIN local_discourse/discourse_ose /sbin/boot
12bb1e40517bb4893ff428096fa204f145c75d64be6a269cbe3093543373c6a8

real    8m57.076s
user    0m2.275s
sys     0m2.021s
[root@osestaging1 discourse]# 
  1. After that, I had to restart nginx; I updated the documentation
  2. Nginx failed because the log dirs didn't exist; I added that to the install guide, restarted, and I was able to access the discourse WUI!
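    1. a minimal sketch of the kind of fix I added to the guide (the exact directory is whatever the vhost's access_log/error_log lines point at, so treat this path as illustrative only)
	mkdir -p /var/log/nginx/discourse.opensourceecology.org   # create the missing log dir referenced by the vhost config
	systemctl restart nginx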
  3. Next up: let's see if we can do a restore
  4. I did it, and it worked! I updated the documentation with a restore section https://wiki.opensourceecology.org/wiki/Discourse#Discourse_backup_file
[root@osestaging1 discourse_ose]# [ -f /var/discourse/shared/standalone/backups/default/ ] || mkdir /var/discourse/shared/standalone/backups/default/
[root@osestaging1 discourse_ose]# cp discourse-2020-03-08-172140-v20191219112000.tar.gz /var/discourse/shared/standalone/backups/default/
[root@osestaging1 discourse_ose]# ls -lah /var/discourse/shared/standalone/backups/default/
total 56M
drwxr-xr-x. 2 root      root 4.0K Mar 16 16:52 .
drwxr-xr-x. 3 tgriffing   33 4.0K Mar 16 16:52 ..
-rw-r--r--. 1 root      root  56M Mar 16 16:52 discourse-2020-03-08-172140-v20191219112000.tar.gz
[root@osestaging1 discourse_ose]# 
[root@osestaging1 discourse]# /var/discourse/launcher enter discourse_ose
root@osestaging1-discourse-ose:/var/www/discourse# discourse enable_restore
Restore are now permitted. Disable them with `disable_restore`
root@osestaging1-discourse-ose:/var/www/discourse# discourse restore discourse-2020-03-08-172140-v20191219112000.tar.gz
Starting restore: discourse-2020-03-08-172140-v20191219112000.tar.gz
[STARTED]
'system' has started the restore!
Marking restore as running...
Making sure /var/www/discourse/tmp/restores/default/2020-03-16-165545 exists...
Copying archive to tmp directory...
Unzipping archive, this may take a while...
Extracting dump file...
Validating metadata...
  Current version: 20200311135425
  Restored version: 20191219112000
Enabling readonly mode...
Pausing sidekiq...
Waiting up to 60 seconds for Sidekiq to finish running jobs...
Creating missing functions in the discourse_functions schema...
Restoring dump file... (this may take a while)
...
Cleaning stuff up...
Dropping functions from the discourse_functions schema...
Removing tmp '/var/www/discourse/tmp/restores/default/2020-03-16-165545' directory...
Unpausing sidekiq...
Marking restore as finished...
Notifying 'system' of the end of the restore...
Finished!
[SUCCESS]
Restore done.
root@osestaging1-discourse-ose:/var/www/discourse# 
root@osestaging1-discourse-ose:/var/www/discourse# discourse disable_restore
Restore are now forbidden. Enable them with `enable_restore`
root@osestaging1-discourse-ose:/var/www/discourse# exit
logout
[root@osestaging1 discourse]# 
  1. I reloaded the site in the browser, and it went from the first-install wizard to the main page. I was able to log in successfully. It works!
  2. I also added a section to the install guide for creating a backup, which just links to the "create backup" steps of the "Updating Discourse" section
  3. I crossed off the "Test/document backup & restore process" item from the TODO list!!
  4. I still want to do another upgrade before crossing that off, but there's nothing to upgrade right now since we just rebuilt fresh
  5. Next up is either varnish cache or unattended-upgrades. I think unattended-upgrades is a higher priority.
    1. rather than setting up systemd in the container, the best thing is probably just to create a cron that calls unattended-upgrades
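    2. a minimal sketch of what I have in mind (the schedule, log path, and use of `docker exec` here are my own untested assumptions)
	# /etc/cron.d/discourse_ose-unattended-upgrades (sketch, untested)
	# run unattended-upgrade inside the Discourse container once per day
	0 4 * * * root /usr/bin/docker exec discourse_ose unattended-upgrade -v >> /var/log/discourse_ose-unattended-upgrades.log 2>&1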

Sun Mar 08, 2020

  1. after a couple months away, I'm picking up the Discourse install on staging again
  2. the highest-priority TODO item is a means of safely cleaning up stale docker images before moving to prod. Let's see if I can implement that now
  3. this is a great article that describes the issue and how to resolve it using the --rm and --name arguments passed to `docker run` https://rollout.io/blog/easy-container-cleanup-in-cron-docker-environments/
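    1. as a quick sketch of what that means in practice (my own example, not from the article; 'discourse_ose_test' is a made-up name)
	# one-off containers: --rm deletes the container as soon as it exits, so nothing piles up
	docker run --rm debian:buster-slim echo hello
	# long-lived containers: --name makes a second run fail loudly instead of silently creating yet another randomly-named container
	docker run -d --name discourse_ose_test debian:buster-slim sleep infinity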
  4. in our case, I can't remember the last time I did a cleanup, but here's what we have now: only one running container, but another 29 containers that we don't need/want
[root@osestaging1 ~]# docker ps -a
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS                      PORTS               NAMES
7987b80223d8        local_discourse/discourse_ose   "/sbin/boot"             2 months ago        Up 3 weeks                                      discourse_ose
5cc0db30940b        940c0024cbd7                    "/bin/bash -c 'cd /p…"   2 months ago        Exited (1) 2 months ago                         thirsty_borg
24a1f9f4c038        6a959e2d597c                    "/bin/bash"              3 months ago        Exited (1) 3 months ago                         peaceful_leavitt
6932865cc6a1        6a959e2d597c                    "/bin/bash"              3 months ago        Exited (1) 3 months ago                         friendly_grothendieck
fce75ef5ce06        940c0024cbd7                    "/bin/bash"              3 months ago        Exited (0) 3 months ago                         gifted_booth
03ea184c205e        940c0024cbd7                    "/bin/bash"              3 months ago        Exited (127) 3 months ago                       clever_solomon
6bd5bb0ab7b5        940c0024cbd7                    "whoami"                 3 months ago        Exited (0) 3 months ago                         upbeat_booth
4fbcfcc1e05f        940c0024cbd7                    "echo hello"             3 months ago        Created                                         sweet_lalande
88d916eb12b0        940c0024cbd7                    "echo hello"             3 months ago        Created                                         goofy_allen
4a3b6e123460        940c0024cbd7                    "/bin/bash"              3 months ago        Exited (1) 3 months ago                         adoring_mirzakhani
ef4f90be07e6        940c0024cbd7                    "/bin/bash"              3 months ago        Exited (0) 3 months ago                         awesome_mcclintock
580c0e430c47        940c0024cbd7                    "/bin/bash"              3 months ago        Exited (130) 3 months ago                       naughty_greider
4bce62d2e873        940c0024cbd7                    "/usr/bin/apt-get in…"   3 months ago        Created                                         boring_lehmann
6d4ef0ebb57d        940c0024cbd7                    "/usr/bin/apt-get in…"   3 months ago        Created                                         loving_davinci
4d5c8b2a90e0        940c0024cbd7                    "/usr/bin/apt-get in…"   3 months ago        Exited (0) 3 months ago                         quizzical_mestorf
34a3f6146a1d        940c0024cbd7                    "/usr/bin/apt-get in…"   3 months ago        Exited (0) 3 months ago                         epic_williamson
f0a73d8db0db        940c0024cbd7                    "iptables -L"            3 months ago        Created                                         dazzling_beaver
4f34a5f5ee65        940c0024cbd7                    "/usr/bin/apt-get in…"   3 months ago        Exited (0) 3 months ago                         quizzical_haslett
0980ad174804        940c0024cbd7                    "/usr/bin/apt-get in…"   3 months ago        Exited (0) 3 months ago                         wonderful_tereshkova
79413047322f        940c0024cbd7                    "/usr/bin/apt-get in…"   3 months ago        Created                                         naughty_proskuriakova
ba00edad459a        940c0024cbd7                    "sudo apt-get instal…"   3 months ago        Created                                         quizzical_burnell
7364dbb52542        940c0024cbd7                    "sudo apt-get instal…"   3 months ago        Created                                         cocky_bhaskara
9d0e485beba0        940c0024cbd7                    "sudo apt-get instal…"   3 months ago        Created                                         nervous_greider
75394a9e553f        940c0024cbd7                    "/usr/sbin/iptables …"   3 months ago        Created                                         admiring_cori
8c59607a7b23        940c0024cbd7                    "iptables -L"            3 months ago        Created                                         silly_buck
92a929061a43        940c0024cbd7                    "bash"                   3 months ago        Exited (0) 3 months ago                         sleepy_cohen
0d4c01df1acb        940c0024cbd7                    "bash"                   3 months ago        Exited (0) 3 months ago                         busy_satoshi
3557078bec62        940c0024cbd7                    "/bin/bash -c 'echo …"   3 months ago        Exited (0) 3 months ago                         busy_sammet
56360e585353        bd5b8ac7ac36                    "/bin/sh -c 'apt upd…"   3 months ago        Exited (100) 3 months ago                       youthful_hermann
53bbee438a5e        9b33df0cef8e                    "/bin/sh -c 'apt upd…"   3 months ago        Exited (127) 3 months ago                       awesome_newton
[root@osestaging1 ~]# 
  1. note that the actual discourse container has a non-random name ("discourse_ose"), which suggests that *it* is not going to have this issue. Most of the other ones are probably crud left over from when I was testing installing & configuring our discourse container in the past. Anyway, that's just a byproduct of maintaining a server running docker, so we need some cron to clean those up. Maybe just delete non-running containers that are older than one month once per day? Let's first see what others do
  2. oh, note that the above article only tackles cleanup of containers, not their images! We need to clean those up too. Yeah, look at all these 2-3 GB images we have lying around! There may be some dedup going on, but we still need to address this before prod
[root@osestaging1 ~]# docker image ls
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
discourse_ose                   latest              f360219e7107        2 months ago        2.36GB
<none>                          <none>              902ab1153546        2 months ago        2.83GB
<none>                          <none>              6a38ab892c66        2 months ago        2.83GB
<none>                          <none>              a6c08aea996d        2 months ago        2.91GB
<none>                          <none>              16fedb01ee11        2 months ago        2.83GB
<none>                          <none>              934235cef44b        2 months ago        2.83GB
<none>                          <none>              b9124ece424e        2 months ago        2.83GB
<none>                          <none>              b60e76b801d0        2 months ago        2.83GB
<none>                          <none>              684c8db14460        2 months ago        2.83GB
<none>                          <none>              4733e46731f5        2 months ago        2.35GB
<none>                          <none>              d2ba235e2052        2 months ago        2.35GB
<none>                          <none>              0af96397bca5        2 months ago        2.35GB
<none>                          <none>              e47e791da20e        2 months ago        2.35GB
<none>                          <none>              dab237dcc410        2 months ago        2.35GB
<none>                          <none>              6308c8322205        2 months ago        2.35GB
<none>                          <none>              8f7a542c265f        2 months ago        2.35GB
<none>                          <none>              fad2f7a939c2        2 months ago        2.35GB
<none>                          <none>              5370c55fc784        3 months ago        2.35GB
<none>                          <none>              34a0a074eece        3 months ago        2.35GB
<none>                          <none>              07a20bde2e0a        3 months ago        2.35GB
<none>                          <none>              6a959e2d597c        3 months ago        2.35GB
<none>                          <none>              35f9c9c6ae3b        3 months ago        2.35GB
<none>                          <none>              7777e048cc05        3 months ago        2.35GB
<none>                          <none>              c747ce58fcac        3 months ago        2.35GB
<none>                          <none>              29636507e88e        3 months ago        2.35GB
<none>                          <none>              33bf827f17c3        3 months ago        2.35GB
<none>                          <none>              2a321b9e0983        3 months ago        2.35GB
<none>                          <none>              ed80d37d2677        3 months ago        2.35GB
<none>                          <none>              afad0500f490        3 months ago        2.35GB
<none>                          <none>              56278b2b4bcc        3 months ago        2.35GB
<none>                          <none>              881ad5ed562f        3 months ago        2.35GB
<none>                          <none>              7e7f7a9077e9        3 months ago        2.35GB
<none>                          <none>              9a9c4a518757        3 months ago        2.35GB
<none>                          <none>              397a2c78054e        3 months ago        2.35GB
<none>                          <none>              660cfc56e280        3 months ago        2.35GB
<none>                          <none>              5de79d28b55d        3 months ago        2.35GB
<none>                          <none>              dc66c61d6acd        3 months ago        2.35GB
<none>                          <none>              495521de5d64        3 months ago        2.35GB
<none>                          <none>              b145fe65f825        3 months ago        2.35GB
<none>                          <none>              751585c2431f        3 months ago        2.35GB
<none>                          <none>              c1dcaf11330b        3 months ago        2.35GB
<none>                          <none>              8a147f48527f        3 months ago        2.68GB
<none>                          <none>              70711b14dbf9        3 months ago        2.68GB
<none>                          <none>              8151854ae378        3 months ago        2.68GB
<none>                          <none>              fc30f83bb8c8        3 months ago        2.68GB
<none>                          <none>              84780c222671        3 months ago        2.68GB
<none>                          <none>              0e0f932cf744        3 months ago        2.68GB
<none>                          <none>              e19fc0a3c80e        3 months ago        2.67GB
<none>                          <none>              099dec231763        3 months ago        2.67GB
<none>                          <none>              e97869b53840        3 months ago        2.35GB
<none>                          <none>              8a85cff68122        3 months ago        2.67GB
<none>                          <none>              2e2990c49c1a        3 months ago        2.35GB
<none>                          <none>              f79e947e60f1        3 months ago        2.35GB
<none>                          <none>              0f4f91d602b2        3 months ago        2.35GB
<none>                          <none>              3afc54b72d62        3 months ago        2.35GB
<none>                          <none>              3f97048def9a        3 months ago        2.35GB
<none>                          <none>              bd5b8ac7ac36        3 months ago        69.2MB
debian                          buster-slim         2dbffcb4f093        3 months ago        69.2MB
discourse/base                  2.0.20191013-2320   09725007dc9e        4 months ago        2.3GB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 ~]# 
  1. looks like even attempting to delete the container *and* the images is *still* not sufficient to prevent disk fill issues in prod. We still also have to run a `docker system prune` job https://stackoverflow.com/questions/39878939/docker-filling-up-storage-on-macos#39890025
docker system prune -a --volumes
    1. so this says we need to clean up exited containers, unused container volumes, and unused image layers. Each can be cleaned up with these commands, in order:
	docker rm $(docker ps -f status=exited -aq) # remove stopped containers
	docker rmi $(docker images -f "dangling=true" -q) # remove image layers that are not used in any images
	docker volume rm $(docker volume ls -qf dangling=true) # remove volumes that are not used by any containers.
  1. I also checked our docker system disk usage, which shows about 15.5 GB of usage and that 97% of that is reclaimable!
[root@osestaging1 ~]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              61                  3                   15.09GB             14.77GB (97%)
Containers          30                  1                   495.5MB             353.2MB (71%)
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B
[root@osestaging1 ~]# 
  1. the docker system prune command does take a filter argument, so we can use it to, for example, only delete volumes that were created before a given timestamp. Unfortunately, I don't think we can use it to find if the volume has been unused ("dangling") for longer than a given time https://docs.docker.com/engine/reference/commandline/system_prune/
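    1. for example (my own untested sketch; the `until` filter works for container and image pruning, but not for volumes), something like this would only remove things created more than ~30 days ago
	docker container prune --force --filter "until=720h"    # stopped containers created >30 days ago
	docker image prune --all --force --filter "until=720h"  # unused image layers created >30 days ago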
  2. this guide proposes a cron job for docker cleanup, but it's only using `docker rmi` and `docker volume rm` (so it's missing stopped-container cleanup and doesn't use `docker system prune`) https://www.techrepublic.com/article/how-to-run-cleanup-tasks-with-docker-using-these-4-tips/
  3. there's this repo for a docker garbage cleanup cron tool, but it hasn't been updated in about a year https://github.com/clockworksoul/docker-gc-cron
  4. ...
  5. it looks like the recommended discourse tool is `./launcher cleanup` https://meta.discourse.org/t/do-you-routinely-do-docker-system-prune-or-anything-else-to-maintain-your-discourse-instances/141708
    1. I gave it a try. It took a few minutes to run, and claimed to have cleaned up >10 GB
[root@osestaging1 discourse]# ./launcher cleanup
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
5cc0db30940b9009d84ccd6a3bb625140a0b79630e6e2fbf3bef1fa7b0698ce6
24a1f9f4c03884a3a28b7cdc3586af51e17c518675e74140b2e23e1349bcdc3a
6932865cc6a1417cd7d5e7b9b2414c4bfd8a0ce98fed8dafda5c96bdddeffdb1
fce75ef5ce067dd3304424d84e10e32d3f069f13e85304781f3ebcc6e3f0b858
03ea184c205e095cb23e9f60cdf82ad3307969a71adaa855d42ab12caa279048
6bd5bb0ab7b52ca9359b8ffc0d74d93e6f65bb82f8d71a9286288bd3e12c1565
4fbcfcc1e05f6dfe5b65a5a39b9000008e2443fbe334b18a49c106d1ef0b9936
88d916eb12b078093ad3fa6b1d54ebf9ea24bcde56f81dbde0ced4d5cb4904ba
4a3b6e12346044d3f9babf6304eb1ad2d6feee02bfb52f12a118ba39d273d973
ef4f90be07e637a74c71196a13bfcbf43a70eac4f8568fc54ee8e8728ad76765
580c0e430c47f1b39ef39783d25edf6056c62d083c473d4af49197e4af129e0a
4bce62d2e873df90c3b4bebf2665e0af8ce9b7131db346da6839d460bf1f4c74
6d4ef0ebb57d4d81b330cd3ae9d82687365235329546b348390708d47c75e08e
4d5c8b2a90e069712917bdd9b8e71ca1f4517f7a0718a540dc65604190882a43
34a3f6146a1d98d792014c243a885c8cba209c675a1b2d2c048e0a9685d89cb8
f0a73d8db0dbee4b2109de245329e564c1098d5def46fc7c65a862995a457d24
4f34a5f5ee65c78fda0b7505ec9a3cc2dcbc457674abfdb7c8a700a4812c556e
0980ad1748043245713eb54b00f89eb8b9251d44e33f1bf6906d1c5206f7b39f
79413047322f9a06186c6adc8856199acbd67ff1ba6bc65f34aae2277ce68082
ba00edad459a1b4913e603773d2850f38c084c30adecc1e80e903b792fdd955e
7364dbb525424591141495a4f7200910b847671cbd81b2bc8995e21bac63e643
9d0e485beba030a0220c95f574a1f6ccc4d97044ac7bb9b79ba8cf4e3e7b6ff2
75394a9e553fc67ab7d2c2550a00673ce7b282fc432b31d4f63b9d142c1d6398
8c59607a7b230176ae7fbab5dd08ecddc218c5d9907d8eb45e01152707275257
92a929061a43bb043ea62e192608a1a3db37ac4453734eb7902868cee7eb188c
0d4c01df1acbd711c14c79fafe77c65e4aef030d17cb0bbb19cfaddb8ad7350b
3557078bec62b7a032b1293192695bc93130e0aa81cec231f1a7a1b54810838f
56360e58535395a6b696b302b5b6846b96d6e42cff1a643a9ffced0b3d38bcf8
53bbee438a5ede217c4c250aab01629eca5d9ca4423790476d8449d14a9feaf0

Total reclaimed space: 353.2MB
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
Deleted Images:
deleted: sha256:7e7f7a9077e904463bf6f7a8ca0d54aa01a8c5a9ad4031fd9dfb99736cbe1a7f
deleted: sha256:114ff72b4584d2f697dec2f7e7ce386b6d6c0d5941714997160941acf50ddb35
deleted: sha256:29636507e88e3b16f1dc72c4914ab6923c4e17d03363402f9d8dcea373fd0d89
deleted: sha256:0094afcb39a6faa3cd2cfee9f062f8d643b084bb79c5942d1965ab5d010e2485
deleted: sha256:5de79d28b55ddc9196b6c5901fe7b66cf3d2261b5146e4a098c3383e2b6f0486
deleted: sha256:9e09cd21f7acdae8c304da3a6d641868c492adf2928edf42b06bd4e04dca14f9
deleted: sha256:6a959e2d597cca7767c3cf0dd4039486387b1b3423d1d54b38ec6156dffe2289
deleted: sha256:5b60dd031f2637ddea6e2831f0ea59b240a0f77c94c5b0c2a31f1e4fa9bd9102
untagged: hello-world@sha256:c3b4ada4687bbaa170745b3e4dd8ac3f194ca95b2d0518b417fb47e5879d9b5f
deleted: sha256:0f4f91d602b27d1a17719a5a9243c51153eea5ff43d5fcc06b10ca40fda3c0db
deleted: sha256:ba7fbe7f067d1ec9029dbaf510bdcb0a0fd15c9c3d3595d9ed09c3bf803d2723
deleted: sha256:dc66c61d6acd38455cb1e9edbcaa548712b8b218a738aa5c1b3a48d8c0facce3
deleted: sha256:40d0aebfb1f0b45f446aa2096f90c1b061c8f54a47bf4581e785f9dd3dd141d0
deleted: sha256:f79e947e60f183b10d81867720f358cdfd7ddd94168a78769d121ee6e9c127a0
deleted: sha256:46bcbbe21f22ce90bc4dfcd686825e0d2d8632d19b5d2b098af3132584ccdede
deleted: sha256:0e0f932cf74405014adeb442ca5eb08ac183ea09e24902208fd758a909d03574
deleted: sha256:28ad1f6a70e0c42b0552a6b7232298b2dbda4f3c303d7dbeb40bd8cdd3dee49c
untagged: discourse_ose:latest
deleted: sha256:8f7a542c265f5c2e5b098f494651446aa5e29c32c395bc70860e213f3ea51ae1
deleted: sha256:1c97d168c4746ac927b380b234dfa49be9f3dcd4da27111dbbb44bd427ee5ad7
deleted: sha256:397a2c78054ecb020c2604f9f831fe8253daef5870712f7c475eb8436cb2d9b1
deleted: sha256:3e71b16542d1e22c49856222454d8c3bc0d15b79fb71a6434d4885c437531378
deleted: sha256:afad0500f4907f6a6e789693b89a604e2556a19f335b011e784dfa6711aff3c7
deleted: sha256:ee35d3306fb4a195012e46ab90ccb98e47bcf53bda39b477650794d690e9f398
deleted: sha256:3afc54b72d62f5e5a12fb54dd9992c28982e9de887bba05398bb9d3c0e21fb5d
deleted: sha256:4ae0553a8fe8bc4e57e873c0f62dfc3553dfdb00bcb1ee1b3a5f129e9fb9bfdb
deleted: sha256:bd5b8ac7ac363ff5501bdd34aa55aa6037af2d4dbab16913562f801e982eddf3
deleted: sha256:e6578a91039879307f0590d07efbe3cdb646ab279efdd0d199a3a24d023b89b6
deleted: sha256:e9f68a80c9c26688043456b916cfea2dfe5f53fe0b3e6ae455971a72eef6c59e
deleted: sha256:552d7daee7a141037a393133687711f2d4dff36f29647cba43a6feee134c72ee
deleted: sha256:c11a7e0c69f4bf54367ac2654f49150dbebb103236e20c373608f1ec47cba056
untagged: debian:buster-slim
untagged: debian@sha256:7be5614ec5571593708ea1a3e161aa6ac44066085fd732a61af9e3880222722d
deleted: sha256:8a147f48527f6cb79120f9c9031a95208b16750917b03a364c96966a87566f00
deleted: sha256:a67381da7d48fc0d9425c0ca593e82969a8ab175088fffece0ec07ecadaffcc0
deleted: sha256:4733e46731f5ef4dc0035df622e91b346677ab5677d81d05d22bcb3dcf857e69
deleted: sha256:bac842b447a14dabf1c9e2eb338189c5a9dd5fb7e09282aec353b4bdfa01f9b3
deleted: sha256:2a321b9e0983a5134e9ca9459d4fc31b83eedb209ecf2a95697de6ed78e87542
deleted: sha256:4cdbb85e7f3a326e9548a80b9e5360ac5e477c7e9c740cbbc840f73af8e61d6b
deleted: sha256:6308c83222050ea789d62a4421d985515a50868a3aa9a02066e91abcf9cf626e
deleted: sha256:a460db4424122960164cacf2377ec03b703ad385cf1a2e86c6a835d95539c442
deleted: sha256:56278b2b4bcca91689f939a5b93a7749d12bbcb74ff787fe2bc51eaece55613f
deleted: sha256:2ba9818b114e55c841f3490f9c29b71e2c6de5802aa8b6815d8034bf43696a21
deleted: sha256:34a0a074eecea6dd55236c1e07f1bf279d9ac568f8c153f3170754fb10a3266f
deleted: sha256:1d67c4207e6bf4e86aaaaae7fb47d2b3a946ec9c61e9a0fb4f111b5b0a2906c3
deleted: sha256:fc30f83bb8c88839bbf4318cd5f6d101401ce4895ef0411354043f80035785a0
deleted: sha256:3d202e4a53430fb9647db2ca007f57bd42b70f164129d340a391c95aadccf60f
deleted: sha256:dab237dcc410c3b57db5b1d9224571b1035e98059a76820774071a1142d0efef
deleted: sha256:f1cdccb063069792bad6eda463344ef7f9978db850291035f311385e6ddced97
deleted: sha256:0af96397bca5fcedf0e64ba7cad2754f76680c73e62dc4e0de922d9d37ff2049
deleted: sha256:3b4ac336562a964c0f9296175feafeb137394af2a1d96df1e4d0e961cffc95bb
deleted: sha256:fad2f7a939c2341de15c9382f9bfb0566cffba4dced2abb5d5b0b710a8189c63
deleted: sha256:61c4edf8dc2d0a95982f7bba89bd33eb0448ee8a55a0e5f47d77c4620b91a1e9
deleted: sha256:8151854ae37890cef95d1a21eda647f15c037290952d9d77142129e15ed8dc60
deleted: sha256:7a831f682b3d766534f87fb89f05a61f65193898b0f47a5a1c6dac571714e443
deleted: sha256:b145fe65f825054a0c63ab4abf74a1663d8f2b3f1b9735c3eb0814b512766ea3
deleted: sha256:7137b88f93b5dd04337204f75cc0c33f3a78bbe615c765056e77e5179eb6d03c
deleted: sha256:2e2990c49c1a20cbf0ae5e160e124c5bc25790b2d584a65b601964d5540c34b2
deleted: sha256:24776019dbb3037ac292f32bc2514fa94fe44b5d08d0b09e5a73866e008d5fc8
deleted: sha256:a6c08aea996d501b8d05c5aa1c38de8ee52da0e204b188e7361f0dd945735e17
deleted: sha256:7170d15c026ced9095cbfb1a0b152213e2c7ad4c9053ce799d07d614e2b13486
deleted: sha256:684c8db144601ecfc6353f0be157035753bbe4515d28b2d7b570f26862f7f076
deleted: sha256:f4017ba4427f13727ae28351d5e302dd40bc57305a736c8b1628cda11b029af8
deleted: sha256:495521de5d641a36d6e61b1d620e2203b14b3f47f726072266dda6035afa0d24
deleted: sha256:e99e7faf02ee114bc9a4bf95d8cec58c0fae9652955ada8fe994e1a3a8e98e12
deleted: sha256:e97869b53840c43c8bd2b90b6898cbde5e8e6f208e7c8555c29bc862de6178e8
deleted: sha256:40d9acdd34de32944481b9f00d6becd7bcb37709ee0ce028d5249c631eb90a30
deleted: sha256:8ea02f13a5a175fc8c64a2da75aa5030644c5dcf86922b56a8ccc099fedab066
deleted: sha256:48ebf0548f35e46909970ae0047033e495ce85946ebfef6dd83b411ca9f185d5
deleted: sha256:633070d53e45d4d5ba59432686f2d2c54b639e38cbcb1351b6bce8680795cb58
deleted: sha256:f0866f161c0273d4d283f5f323becd498725583ad17ebe46ab536da1e95def0a
deleted: sha256:355031af936f78b0a8e3eba565edfb77ffbc164a4caa92506bb45ea1662a8f3e
deleted: sha256:099df9d6c06f9a6143a45a4fadb53109c75bde564c3c492fafffc20cce7f4df0
deleted: sha256:1ae53363669c9a1f7a9b6f4693813d707b8f82b8834851fbc303e48187d748e9
deleted: sha256:4c6bc4a4785bc1be298ffd8721724f73fd311c694664896a68d5e181ab402fd2
deleted: sha256:1997b22b65ff266e0e08de752f8089ff6565020567b6ad37ecd78323f9a19ffd
deleted: sha256:9d917394e3bfb4ed99a51a7a6842ae245118cd86f45fb2ae36f7f6c9dc9717f4
deleted: sha256:f3c3bf5c272867f28e22f7ca29e7711b3e5a95e3d10902a148de69b2099553a9
deleted: sha256:b548aa7b7756420496f6cc4d3e711534c408757a9b6c18ea98786a38b0133d27
deleted: sha256:8d469b1cb7e6a5abea421418a6b0ee12c187f01b7f2cdf3c7cbbdfbbdcb15a57
deleted: sha256:1df3afda3d01a3415ad9ecf96aeaf35190bda33a2f7f493f4e8a74436bf43270
deleted: sha256:33bf827f17c36d8cae3f86f3610e2209f912d5d59f1251f98440cf30319180f8
deleted: sha256:4212b112783361ce33fbbaad35da6e09cb48159f5880a37286dc5dce73b8742d
deleted: sha256:ed80d37d26774cf0d3a51cde9e08808814281f4ebb07cc7a8e44a7c1b333e3da
deleted: sha256:7c63447ddb116a2fb6f4f3bb022051f115c4eb0a5990582605f679de9735cfe9
deleted: sha256:70711b14dbf958f0e02d795e318de9992c69a9cd16eea30d4fa79381a648722d
deleted: sha256:aca8b6f87f72fcee3ab89a494f60ec14b97ab0445177a1666fb0e98d9a0c95f7
deleted: sha256:e47e791da20ea4a7cf28473edb9fc5d41124f9cd59accbfa992caee24b692388
deleted: sha256:e2774bc1244967183cc01fb3aba827e2686636f5a4beb7442008a63417d79dac
deleted: sha256:35f9c9c6ae3b75920a68ba4559554936312e095f8255cf55e1e2adab5cd823be
deleted: sha256:ca242151d6265aae939f536cef7c0d93b315da573ed8ab84a89e57eb04e610a2
deleted: sha256:5370c55fc784b37efd75c16a89de7580552c0b39d5a65f7d93006a70cbfb2f28
deleted: sha256:d9fa395d11ad8684c80680fda720a1aed8d2db9e3d53ead131af629be96fd94d
deleted: sha256:902ab1153546384a6bc78e4136684daca2a3f36d6035da24d9498b68581e43ec
deleted: sha256:7a1b6373fbe936ed93a2e855c13a06888ce83866a4b59c2f422fc824423887bb
deleted: sha256:3f97048def9ae8c80c689f8278db1f302cc6dedf611dbaa6a42d2ef600cf0407
deleted: sha256:90298062031e1ec7fdf62918eb03d16415b26bf5dd5415d8de08fa158ca5106a
deleted: sha256:84780c2226713ec9a8a1a01b95453adbacc78b3feee392afa742ba790c7a59d9
deleted: sha256:d1386fd7aaf714cce79fab825b99b24d03cddfbb250c162163c87df832a2acd1
deleted: sha256:6a38ab892c6606bb0971fa0d524eb292dd79cf3ee9335b7c06eafe08642ef4ed
deleted: sha256:a50d4b18d1abad08003676de5b307551fb93aa669338f3996516eae43e5fda92
deleted: sha256:8a85cff6812288ef1074a69866e21138996e28ee7de972a669e4763b775bd34d
deleted: sha256:bd7b74f0dba3d328a9be70eee02174e0dd99a16abe4dd24898d69735516b546c
deleted: sha256:7777e048cc0556f33c47fc37944f3f5cfae79eab9ebbfd2e51daa5d655ca7cd5
deleted: sha256:d695259d0690e7616ebe870422568d059df99b3f14e70aaf6cd1ba617851c896
deleted: sha256:16fedb01ee1113385d2bc535396b2f3968231560e3d31506fb4bacf803863f7b
deleted: sha256:bde83994707222bf52d66d94701f4e230eacc0bbfb46cd64f069d6649717bd1e
deleted: sha256:c747ce58fcacfe7e7a255c744a5074bba5ff177d490d6f7f9ae946e0c2481296
deleted: sha256:70fe26725a1b206cd27e92ae93c14a4376fe0e258c01f759d8ea63cc3c0543be
deleted: sha256:9a9c4a5187574421cafcabf8fb1b31e4d9165e05f1c39b6b06fe18d13b8ffd9f
deleted: sha256:82ae2fd5d6ef9afac4b3ecacfe694e678170f70d4156f3a9774496826c9a7b30
deleted: sha256:099dec23176381d506513afb716a5ced0292dd8c5232560c82f2d15af91ab075
deleted: sha256:f2e63200327aa5d18194eca4fff6979bf7ee3fa72e469b6ab0dbcf8ed7ea52a0
untagged: discourse/base:2.0.20191013-2320
untagged: discourse/base@sha256:77e010342aa5111c8c3b81d80de7d4bdb229793d595bbe373992cdb8f86ef41f
deleted: sha256:09725007dc9e197fd178dade194c5a9deb9669c4b45bb6e4a9ac9511be3cf9dd
deleted: sha256:660cfc56e280f622a860b07c41f9a545f2693f8a27e8485de40b4b041d73a172
deleted: sha256:e609e63f5673635a3934b2f081c9f654b3e35ad41081cebbe4489d9e6d78fec1
deleted: sha256:07a20bde2e0aeba371a13b5f86fc368431b258157a14e1a2c6b9bee539c61bc1
deleted: sha256:c82bb97461ac789ea2a41635ea87987588540fd75a16c617271dafbdf692304e
deleted: sha256:881ad5ed562f97ddbf5f7077f98c6d8f424bae5a741e84df630834ef8f479014
deleted: sha256:0115364c0ddc92eb9ecde0541380cac9804a03ca41084a4c8b0eeebe538bd849
deleted: sha256:b60e76b801d0fb2b87fd69da4da4318b0d5ed03b92f1f222b447372e80ddb55d
deleted: sha256:adf66c360e89160da050ec267a5a2e06bbe08425b3485a9450bb0498da6ca387
deleted: sha256:d2ba235e2052b4336c0884d7bd5a79ee579d90e549716c36ff7c1122d8db0fd2
deleted: sha256:18172e1bb91419b53bda37c1df91e2ba8d15d0c931eb8594376887407fe9fd4f
deleted: sha256:c1dcaf11330bf3fa34e47880ccf0549b5c4a5659b1ef78e9130086a6ff2e7a25
deleted: sha256:bbfc3a5eb94cfa09f7ea92accd1f28e0d066d72c459dfcfd7de007799c494e18
deleted: sha256:934235cef44b5b28b3dfaef481f73cadc650bb8d2780c34d6ff140267e807717
deleted: sha256:1ba7589ba8450b8954ccf017cc4fe4bd0991c3e8ae038f76ebb52da88d24085c
deleted: sha256:e19fc0a3c80e565e2bee4a87855feae64ec1a74a70f4170bd78aae87f299745a
deleted: sha256:20634a751ddf8d34acbd4e6be1188bd6097d427faeff75c7ecfd2d3ba6cca27e
deleted: sha256:b9124ece424e9e74b88d282a640287c8bb8db86f4aa46fa4c20a71a26f24f667
deleted: sha256:801103461e832b0f7314cffce409c7d3c19a4baab6a29ab40b3091371e1cdb1e
deleted: sha256:751585c2431f6c12107e25cecbd09468da8715588a428e8e43c76bcad0d487df
deleted: sha256:e931ba08437489570a39aca85105e1e21703890d1252129a3c65fc5dde7fd15a
deleted: sha256:940c0024cbd795d6f66c64ebbb1ab96b52a3051393d7d8dd85451f94b5f89c63
deleted: sha256:3453126b0244b585bc03de5958ce1b87080b5e61318f338a96f9029143b8a7e3
deleted: sha256:ff3282a07057f50721341f3209f96c959a001d499b0898080e359ea051c132ed
deleted: sha256:a0248d7738fa45298d3f32a81e4a8496ad5f3dd50546b968b0292b8e83d647ce
deleted: sha256:fbc94b192519306b308305b99ba2b7114b3679f81c770e23023033a194ae682d
deleted: sha256:177a3ba5638360ac74a98753a01dc94a401a3342b30430a69a1bc60de6131309
deleted: sha256:08a4dd18ceaf412d96924f73b3e5163203bdf23a841f327096be209737923e9b
deleted: sha256:b3821941e9fa7ffb0eba3da911af6a7d1ff008f0d69398b6da5a5e15ce369fea
deleted: sha256:712c2014b1a18207b1bd7981188f322692a04765fb822dc7f966c36432d9741d
deleted: sha256:acbd0b496e78c5cc08f49eb1e502ec2b8c223301d53f978fdb19a0fab1449213
deleted: sha256:fb25b16cdd97ca1f40e6fe8986e370c109f4bc568cf22f4492fbee51fbf3ec5e
deleted: sha256:4661871816baeeedef52627da420a1c170f659749fe8679c739e7f82f41dd9c6
deleted: sha256:b5649fbe979f5b078bd675a44e28570a4a59b959ac9f9545c95a78228abe64d0
deleted: sha256:9b915d5f4ce969cac092b033e9ac21fd6e45b624605b2fd0b69546fd6afb5136
deleted: sha256:e958ab93809edf415cf201033b4c2d864f7ee76b5073896ddf9e3bd620568e09
deleted: sha256:e442e62020ded979e771646dbb9b9b3e9f8e962a21032a60d0414b5732e28e37
deleted: sha256:af413de85e107d21347b90bbe29dc4dd42675b1cd259cda4cbfcaa30937b6f4c
deleted: sha256:a10df0a71692ea82e3aa781a320fbdde38427fc8007855e0cb14b87aec9f37ae
deleted: sha256:fd9e17c86b66ccfc1977432a06da2690a4876bea6d673e76b70fa02c4d11e38a
deleted: sha256:961f64a2735d748673ce387183fee1e049cb4ada5ce4edc5ad1e1a1e69a30a90
deleted: sha256:5096b249d4fe5d39c5102c3bc988157b3a754cd135e833566f412e1e4adf3b1d
deleted: sha256:cd31d4fb8e0ff074460b1303a4de0b31ceaac41514de99126a7c64f0609c62bf
deleted: sha256:38cf8bebc30c59bd6a36234423813ae39b0aae241f23d631279ad62e8dbe8020
deleted: sha256:767d1c63c834c0b68f97d7753351d36c42db87988010608716835e4d2b6af916
deleted: sha256:c26a0b424f83d38c9081601f8b46d81e505767daeb5258beb2d12cb9680b7652
deleted: sha256:ffb8167e3c33cdcd27401665804d857d78977c1a46940b86dea1b6c71bc25814
deleted: sha256:3bc0e4fbbe2dbdf026dcd85bf0fb890e3078e0457e3e8969c2f957ccf2529250
deleted: sha256:dc9d4bde2b3ba5af9c1a26f0f3c4a99f87e9e3c935101eeacc1fb12a2c5120c2
deleted: sha256:54a9334b3ab3971a2a5fba636935437317180b01326142751c46a54d6afc5e84
deleted: sha256:1e38d9afd848a0b3d4794b3bb401c113fb7e7ec0fceb6c3366c156db8b6f9be5
deleted: sha256:c908814cb552f44824f2d55d291dfee9b2d20e746526f0130720cec22f883a05
deleted: sha256:b1f284d22df0b3c6908d7a26c43c5da54cf2a7bab0ab1bb5d9d74db17167b16a
deleted: sha256:0217f6d8f213ad8532c829243a0125f513330a4b7e1db41ad95cec38a6058832
deleted: sha256:33d5ac1480373ce81125d62d707a6f0891bfeae1ec799fcb334ece7c1689e2cb
deleted: sha256:b7f9577cdfabaef02fbff46dca5dd5ede1f0642686e736796c23dd578acd8d48
deleted: sha256:60751788448bfd4046059f4ab5df06b7eb87367b49e482a807f547746e7fa987
deleted: sha256:230fd30681f2c21377e300142962f8f50b703faf8c89ec8a4f450acf902a5209
deleted: sha256:7b148187b8946de24d30fb1561cfafea1607305d3e058e7341ae92546c60f3d4
deleted: sha256:dfe9e8b7c1fd8fa51ff1b62f48dd1cd4cd28f15afdaa5ee730a1bc2da8d0ffc6
deleted: sha256:34c1540af5a119f8ccdbc8eebce07e0ffc64f98fc762e25b56964bf2d83c1592
deleted: sha256:e2a1a1d2025a6983ac4082700cc5be2a23b6d35b27eacf0fb9a95aa031fe56ac
deleted: sha256:a0ff7a4d601e063562785e0112666799c0d128e1e56293d0ef6cfe7a312c4671
deleted: sha256:719a556aa66c77b03515fedc6406c2ba73ccff59c7ebb84ea456c219ab0f4679
deleted: sha256:c1f96132d209e67735495c732724ccb572cd49efadc5148582027732ae5af45c
deleted: sha256:1b019970321d083ce927534afb3f1e85e2e7c64a3b0adeb708ca8ca8986eaf43
deleted: sha256:5efdbef7c1bec767dba9d51965753eeed6a45fca63006cc52fc39131b110efe5
deleted: sha256:129a01d12ab6358d9379a41c6be7b7fb470951b344e25af5978767da70b75e28
deleted: sha256:7462fb220f7a0b86acefc5b8238d8cf2bbd34a5c4be9c6f42fbea1ca57fd8258
deleted: sha256:9bd4b7fffc435c3a582b9040f694591c2e81af45aa308c57b6a17e33e3ad5b80
deleted: sha256:e9892525900da137100afb571b29ae2ebb41be721b5964b6e11cbd33046d5b59
deleted: sha256:fcc408e688a07a4024d41742573d0d6a0eca0e2a1c0af492859636567c6db739
deleted: sha256:1377b6be5c4a686d39aa09f42b8ebd7d36900815d0ab280f1d2058a24fdcc74d
deleted: sha256:6dbdb4c71ae98ebc47a230a51fb96253b8db12242bbcc447c1aecd6a0af410a9
deleted: sha256:42012cf175abd48ba94f9df1b04c014d217429f75a2385482d2c45470de46ae6
deleted: sha256:2e2f69002b5cee5f4c4c88cb44e5e632078b0dc479de09b157fdf7cbfdffa52e
deleted: sha256:899cd9ccf1f87ee375688cd19f8e2e5035392860a7d310c14c320754ba80aa2e
deleted: sha256:c587eb5ff96bf02cebad9b54e9fc457e71fce72d8590daad0906478b17a9f16c
deleted: sha256:d4b3f5a89965abbaac6b6b30c443d02adcdc899683eb9135586f1d8e2ce49faa
deleted: sha256:ea2d5ac9a75888dc7268c9bf492c241f110b251ad819586e973f71abf91cdf61
deleted: sha256:b896c48d7c6f2e4a1c244fddb7541b8cdb2540f39c9a898ff65d12dd640384f3
deleted: sha256:d024738357e9fbd088efcf0c03a6f9d10643d84c352b224548fdeb44da8241d9
deleted: sha256:8908cd5d0c10b29a2e0a1cf0365d8938267f8e16c2d59793fcf83d1bf954690e
deleted: sha256:189b9f2aaaabce549d32d1c105e7ca4182ae8aa1f9c0619db73794060576a986
deleted: sha256:b018fe4af4ec355d6b39f2e3817bd7949306a0141cb62e3c6d0987cc1a47ac09
deleted: sha256:c7b25eb7715f2c1cd72ba6a8251c96479d185bb272816928b1893804e0886615
deleted: sha256:7a8df2ca5962f1bbd16b926742cefc649354888afd6253cb801d1765c0822cd3
deleted: sha256:7d8bb77f1f8c7c5a24ffb1fe7e945040d0a9b8b69997c0affeaa828a700f660b
deleted: sha256:70e98420ed50be6285e90660f903f4e4edef1c6e6a135732b55a3fdab1fd7a3e
deleted: sha256:59bbe50106ccb72c2de9a940af4e2631aabfba3b4cd83efc4b81ca00c9a2f0a8
deleted: sha256:efd1290a705be4202d8dc4445fc0fc9309c01ad6b1629b0cb0f5faef3f3f0792
deleted: sha256:ea0e98b71c77731370435afa586bc74c21b826fd6bdcb888c6d7f5f135c6c4ce
deleted: sha256:33b18f943c7dce3d0811f79b7d4c7e8ca80d801f7b3cb6056f85428e417ae115
deleted: sha256:a78be021adc46991b4a5d99a4865a0642c408c90dd95b90d29ee3c8d015adbc3
deleted: sha256:9e3a59520fcfc2ff57f88308e09b2f59e417f5b3d726f7776ca94a8971c214f7
deleted: sha256:8076b93195d0711e560209081e3ad77d0dccf08333448619d5d2d03966631e2a
deleted: sha256:3a16e4e0c419b6dd73bbbce76cb2713399dea710150b5b52cce840882250a38b
deleted: sha256:292051b5a1986e0537cf4fdf0d65a7bb33a5a34e51179f626e445e3e5c54d250
deleted: sha256:d2fb3e6ed6c929b4d4e2281bca02e7da216232a27f31ba7789f1e682898444ee
deleted: sha256:f587d985c884ace7085b19c56945f3d6844688eadae27b7658ea0b791d862b2b
deleted: sha256:b91c7ad6cecbfb9e601001d9930264184810032037ee1cbfcdeafc95fa8deca4
deleted: sha256:55d2b1c8d45ec97aacd6a42922b2c0125ffeb0fcafe4174f5fecd1bafdcc5655
deleted: sha256:3b7e977ba10bca74683e7ef6f57aaa6ad3bdd0d3f1c76a9e22fc8208d6a92379
deleted: sha256:d451e222bab197d9a68487c0ee44a833e55a6e95aa40c85dec6b5292a7bcb407
deleted: sha256:df42a58b023c90d448560f91340b1af56ed41d55fa3784bdcc3ebdeacc2055ce
deleted: sha256:2a0e276f0f7580efe6b3414024f43ad4aa75350d5e519a6cc01c701535c102ee
deleted: sha256:a608f8953be8b41dac818ab61d9cdb109fcb5b475f322c9da79b2ef974c4b9b5
deleted: sha256:ebcca2e3aadd0542fab61f94ac1a11dbbc3b348468c1651b8a0852bafd246677
deleted: sha256:d931365e2359f576f7de34eade7fc5da5e0cf8778a83db35a5acb3c1c1a65d44
deleted: sha256:5101529492de945469c2c3a40fa094ae38b55607fb865819fa1b9d5d198bc586
deleted: sha256:d2f6b12d5548096227befe445225e8cb0f3d1947810f1a287549409c8590beb2
deleted: sha256:bdb63b738197449916570011c9eaa105104200238e2b59ed3e3c91baf58d7239
deleted: sha256:fb04955fa5f7edafd8b647d0b89f50d9120817a27a86d8b19ee52c236d54748a
deleted: sha256:a08a8bf0b7692e7c6baf4863a9e4ed41bff3888664941b9a4b22a1021be0c747
deleted: sha256:8312e741ddd3b014adc5ef0247f575468bac4a2d5b79b19201bdc12677c4a134
deleted: sha256:ef873f28ff709993bf8ad5b9aaeee2e8aed63b5881019306b208bc709f5c9183
deleted: sha256:cc109c204958956f1875a1931ed4e617dfd1222dc345f186d37ecf8a24fdd8ca
deleted: sha256:42c5a3bb9bfc7a04a05c02d1820b8c105888d980c6ac1912c412cea6270c7544
deleted: sha256:e80f22ba85a443bff905bce5335e4823d178eec7660223b1f256f07e51f6112a
deleted: sha256:dd8be434374f3519284bbe5c2555668298ba08ef6a00de6a88cc4ac9a98edbd7
deleted: sha256:53ea6dadb13da691e5e6b9dbdb65f493e04262096a3a1479093bed834fcb57a8
deleted: sha256:2e41646694a111337529cbb96ab34d0e8d5a96469cb1047766b23d20463a2878
deleted: sha256:8c885ee21a13fa4042f6c2c1093f782a83a7c0b7b0708b39947879e938c9a5ff
deleted: sha256:367ff78b0c47630aa544caad70b7d10a281dc18970ae6d82b30d3c424240aa4b
deleted: sha256:c8cc0c3c1e09ecdb3f7e67503fa4617596e88e3aa7b4c37e730eec345e896910
deleted: sha256:f5d561f97a36fe7bff83cadaea255ab63f7c154f962668672be5cd179ea5dcc6
deleted: sha256:d8e7ac8cda7cda9380a290d868afe3da6f0a04684ca9f4e3ce5cba0809e51da8
deleted: sha256:c6d44935223de1422b7382856b8c97349d9cf86d888f79c4b26031e0ae677545
deleted: sha256:40d1e5c481f7edfa66d1f2c3cc847c9b3c0cee043a098719d52f3d6d9e9f688b
deleted: sha256:17328a8363397ac3a5646e00f3f329faee93fa66f7f74d273149f0983950ab9c
deleted: sha256:b2a41355dbff181a9fe5306519c273f303b43888b655bbf193c5e79d92476875
deleted: sha256:c734e700d4cdb4a263ccbae2e6c53d3f5b59c13f0f7935c7d9137f13ef17cafb
deleted: sha256:ca50c45e7cd137f24d0dfa9ea781e52e76644e07f36a15e67a95065ebf2e22a3

Total reclaimed space: 10.1GB 
[root@osestaging1 discourse]# 
  1. And now docker says I'm just using ~3G with 0% reclaimable, cool
[root@osestaging1 discourse]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              2                   1                   2.686GB             1.84kB (0%)
Containers          1                   1                   142.3MB             0B (0%)
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B
[root@osestaging1 discourse]# 
  1. be wary, though: even after this, there are still reports of disk creep breaking production instances https://meta.discourse.org/t/disk-space-usage-of-lib-docker-is-high/67375/9
  2. the docker container list now looks perfect, and the image list looks sane, except that the old hello-world image that we don't use anymore is still present
[root@osestaging1 discourse]# docker ps -a
CONTAINER ID        IMAGE                           COMMAND             CREATED             STATUS              PORTS               NAMES
7987b80223d8        local_discourse/discourse_ose   "/sbin/boot"        2 months ago        Up 3 weeks                              discourse_ose
[root@osestaging1 discourse]# docker image ls
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 discourse]# 
  1. strangely, I don't see any docker volumes *at all*
[root@osestaging1 discourse]# docker volume ls
DRIVER              VOLUME NAME
[root@osestaging1 discourse]# docker volume ls --filter dangling=true
DRIVER              VOLUME NAME
[root@osestaging1 discourse]# docker volume ls --filter dangling=flase
Error response from daemon: Invalid filter 'dangling=[flase]'
[root@osestaging1 discourse]# docker volume ls --filter dangling=false
DRIVER              VOLUME NAME
[root@osestaging1 discourse]# 
  1. even though our existing Discourse container *does* have a mount https://stackoverflow.com/questions/30133664/how-do-you-list-volumes-in-docker-containers
[root@osestaging1 discourse]# docker ps
CONTAINER ID        IMAGE                           COMMAND             CREATED             STATUS              PORTS               NAMES
7987b80223d8        local_discourse/discourse_ose   "/sbin/boot"        2 months ago        Up 3 weeks                              discourse_ose
[root@osestaging1 discourse]# docker inspect -f '{{ .Mounts }}' 7987b80223d8
[{bind  /var/discourse/shared/standalone /shared   true rprivate} {bind  /var/discourse/shared/standalone/log/var-log /var/log   true rprivate}]
[root@osestaging1 discourse]# 
  1. but, yeah, apparently the Volumes are null for that container. Not sure what the difference is between a docker volume and a mount on the local disk (see the note after the inspect output below)
[root@osestaging1 discourse]# docker inspect discourse_ose | grep -C5 Volume
			"RestartPolicy": {
				"Name": "always",
				"MaximumRetryCount": 0
			},
			"AutoRemove": false,
			"VolumeDriver": "",
			"VolumesFrom": null,
			"CapAdd": [
				"NET_ADMIN"
			],
			"CapDrop": null,
			"Capabilities": null,
--
			],
			"Cmd": [
				"/sbin/boot"
			],
			"Image": "local_discourse/discourse_ose",
			"Volumes": null,
			"WorkingDir": "",
			"Entrypoint": null,
			"MacAddress": "02:fc:97:b8:b4:0d",
			"OnBuild": null,
			"Labels": {}
[root@osestaging1 discourse]# 
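  1. the answer, I think, is that Discourse's launcher uses *bind mounts*: host directories passed straight through into the container, which Docker doesn't track as volumes, so `docker volume ls` only lists *named* volumes that Docker itself manages. A minimal sketch of the difference (container names, the volume name, and the sleep command here are hypothetical, just for illustration):
# bind mount: an existing host path is mapped into the container; nothing shows up in `docker volume ls`
docker run -d --name bindmount-example -v /var/discourse/shared/standalone:/shared debian:buster-slim sleep 600
# named volume: Docker creates and manages the storage; this *does* show up in `docker volume ls`
docker run -d --name namedvol-example -v example_data:/shared debian:buster-slim sleep 600
docker volume ls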
  1. the biggest issue with that `./launcher cleanup` script is that it requires human interaction; we need something that can be cron-ified *and* is safe. Let's see what that script does under the hood. It looks like it prunes all stopped containers created more than 1 hour ago, and likewise for unused images.
 "$command" == "cleanup" ] && {
  $docker_path container prune --filter until=1h
  $docker_path image prune --all --filter until=1h
    1. Then it has some logic for cleaning up old postgres backup data, but I don't think we need this, as I already plan to clean up backup data as part of our backup script procedure. https://wiki.opensourceecology.org/wiki/Discourse#Backups
  if [ -d /var/discourse/shared/standalone/postgres_data_old ]; then
	echo
	echo "Old PostgreSQL backup data cluster detected taking up $(du -hs /var/discourse/shared/standalone/postgres_data_old | awk '{print $1}') detected"
	read -p "Would you like to remove it? (Y/n): " -n 1 -r && echo

	if [[ $REPLY =~ ^[Yy]$ ]]; then
	  echo "removing old PostgreSQL data cluster at /var/discourse/shared/standalone/postgres_data_old..."
	  rm -rf /var/discourse/shared/standalone/postgres_data_old
	else
	  exit 1
	fi
  fi

  exit 0
}
    1. actually, my cleanup only deals with the backup data in '/var/discourse/shared/standalone/backups/default/'. There are 3x postgres dirs, but not a 'postgres_data_old' dir
[root@osestaging1 ~]# ls -lah /var/discourse/shared/standalone/ | grep -i postgres
drwxr-xr-x.  2       106  110 4.0K Nov  7 11:28 postgres_backup
drwx------. 19       106  110 4.0K Feb 16 08:50 postgres_data
drwxrwxr-x.  3       106  110 4.0K Feb 16 08:50 postgres_run
[root@osestaging1 ~]# ls /var/discourse/shared/standalone/postgres_backup
[root@osestaging1 ~]# ls /var/discourse/shared/standalone/postgres_data
base          pg_dynshmem    pg_logical    pg_replslot   pg_stat      pg_tblspc    pg_wal                postgresql.conf
global        pg_hba.conf    pg_multixact  pg_serial     pg_stat_tmp  pg_twophase  pg_xact               postmaster.opts
pg_commit_ts  pg_ident.conf  pg_notify     pg_snapshots  pg_subtrans  PG_VERSION   postgresql.auto.conf  postmaster.pid
[root@osestaging1 ~]# ls /var/discourse/shared/standalone/postgres_run
10-main.pg_stat_tmp  10-main.pid
[root@osestaging1 ~]# 
  1. this guide has a recommendation on specifying the Go ParseDuration string used in the '--filter until' argument of the docker X prune commands https://rzymek.github.io/post/docker-prune/
    1. https://golang.org/pkg/time/#ParseDuration
    2. for 1 month, I'll use 24*7*4 = 672 hours = "672h" (Go durations top out at hours; there's no day or week unit)
  2. so far I'm thinking of using this in the cron (a sketch of the cron file follows the commands below). The first 2x commands might actually be redundant as they're covered by the last command, but I'll keep them for good measure. The '--force' seems dangerous, but in this case it just means 'don't prompt for confirmation'. The '--all' passed to the image and system prune makes them remove all unused images, not just dangling ones (untagged images that aren't referenced by any container)
# automatically clean unused container, images, and volumes that are >= 4 weeks old
$DOCKER container prune --force --filter until=672h
$DOCKER image prune --force --all --filter until=672h
$DOCKER system prune --force --all --volumes --filter until=672h
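  1. A rough sketch of how this could be wired up (the script path, schedule, and log location below are placeholders, not the final version; the installed version is documented on the wiki):
# /etc/cron.d/docker_prune (sketch only)
20 04 * * 0  root  /bin/nice /root/bin/docker_prune.sh >> /var/log/docker_prune/docker_prune.log 2>&1

#!/bin/bash
# /root/bin/docker_prune.sh (sketch only)
DOCKER='/bin/docker'
# automatically clean unused containers, images, and volumes that are >= 4 weeks old
$DOCKER container prune --force --filter until=672h
$DOCKER image prune --force --all --filter until=672h
$DOCKER system prune --force --all --volumes --filter until=672h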
  1. I did a test and confirmed that I *still* am not deleting the 'hello-world' image, which I would expect to be unused (though it's tagged, so not technically dangling).
[root@osestaging1 discourse]# docker image ls
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 discourse]# docker image prune --filter until=672h
WARNING! This will remove all dangling images.
Are you sure you want to continue? [y/N] ^C
[root@osestaging1 discourse]# docker image prune --force --filter until=672h
Total reclaimed space: 0B
[root@osestaging1 discourse]# docker image ls
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 discourse]# docker image prune --force --all --filter until=672h
Total reclaimed space: 0B
[root@osestaging1 discourse]# docker image ls
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 discourse]# docker system prune --force --all --filter until=672h
Total reclaimed space: 0B
[root@osestaging1 discourse]# docker image ls
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 discourse]# 
  1. holy crap, it looks like there's a *lot* of unused images here.
[root@osestaging1 discourse]# docker image ls -a
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
<none>                          <none>              f360219e7107        2 months ago        2.36GB
<none>                          <none>              e28a1b9bed3c        2 months ago        1.45GB
<none>                          <none>              0ff8240737f4        2 months ago        1.45GB
<none>                          <none>              684943c6c5e6        2 months ago        1.45GB
<none>                          <none>              4ed7c9b093d2        2 months ago        1.45GB
<none>                          <none>              94e06b1f3f41        2 months ago        1.45GB
<none>                          <none>              d2a1e6a92486        2 months ago        1.45GB
<none>                          <none>              788fb8206122        2 months ago        1.45GB
<none>                          <none>              a71f844b67d0        2 months ago        1.45GB
<none>                          <none>              fb9c58fa1e78        2 months ago        1.45GB
<none>                          <none>              47ebd71a2345        2 months ago        1.45GB
<none>                          <none>              d3213021dfeb        2 months ago        1.45GB
<none>                          <none>              07427db10040        2 months ago        1.45GB
<none>                          <none>              aba70f3b8b42        2 months ago        1.45GB
<none>                          <none>              e42fd85e1a37        2 months ago        1.45GB
<none>                          <none>              ed5fd8bf05b2        2 months ago        1.45GB
<none>                          <none>              a7c356313c3d        2 months ago        1.45GB
<none>                          <none>              7d4f88daefaa        2 months ago        1.45GB
<none>                          <none>              de600a60df6a        2 months ago        1.45GB
<none>                          <none>              ebc6ac0e0dff        2 months ago        1.45GB
<none>                          <none>              6b54a6113eae        2 months ago        1.45GB
<none>                          <none>              b232c958fc8b        2 months ago        1.45GB
<none>                          <none>              45e278eb8c69        2 months ago        1.45GB
<none>                          <none>              ac40010cd131        2 months ago        1.36GB
<none>                          <none>              92f7b48e1daa        2 months ago        1.36GB
<none>                          <none>              a1217869717b        2 months ago        1.31GB
<none>                          <none>              07c9826057c6        2 months ago        1.31GB
<none>                          <none>              8e51906d7bb7        2 months ago        1.31GB
<none>                          <none>              b27a935114a9        2 months ago        1.29GB
<none>                          <none>              4e2e5d2418dd        2 months ago        1.24GB
<none>                          <none>              6e768349f962        2 months ago        1.19GB
<none>                          <none>              6656d1b2a23e        2 months ago        1.16GB
<none>                          <none>              7adb1a416692        2 months ago        1.15GB
<none>                          <none>              d2acf13fafd5        2 months ago        1.04GB
<none>                          <none>              7b5bd38d3844        2 months ago        1.04GB
<none>                          <none>              59a76ac6a098        2 months ago        947MB
<none>                          <none>              3882993549d4        2 months ago        939MB
<none>                          <none>              4a103641e5e9        2 months ago        939MB
<none>                          <none>              511719c6eefb        2 months ago        939MB
<none>                          <none>              0f9f59bbf84d        2 months ago        939MB
<none>                          <none>              e03f19ed9fa5        2 months ago        939MB
<none>                          <none>              bea1da85304a        2 months ago        456MB
<none>                          <none>              cacf1c536892        2 months ago        456MB
<none>                          <none>              42805e85562d        2 months ago        455MB
<none>                          <none>              8e024f4a7965        2 months ago        455MB
<none>                          <none>              625b1d94a607        2 months ago        455MB
<none>                          <none>              a976f80ec956        2 months ago        455MB
<none>                          <none>              e572fac5a9f8        2 months ago        455MB
<none>                          <none>              6f109fdfc265        2 months ago        455MB
<none>                          <none>              f4bd7ef985d9        2 months ago        216MB
<none>                          <none>              4bdd08c7cefe        2 months ago        216MB
<none>                          <none>              c7f86a29f64d        2 months ago        216MB
<none>                          <none>              174ee96e93cf        2 months ago        111MB
<none>                          <none>              91c3a6f41afa        2 months ago        111MB
<none>                          <none>              beb58a265975        2 months ago        109MB
<none>                          <none>              2c3d5dee9c4b        2 months ago        107MB
<none>                          <none>              77b175b0ff48        2 months ago        69.2MB
<none>                          <none>              4e0ebe3978c8        2 months ago        69.2MB
<none>                          <none>              d5a575075cbe        2 months ago        69.2MB
<none>                          <none>              365b514f867d        2 months ago        69.2MB
<none>                          <none>              2dbffcb4f093        3 months ago        69.2MB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 discourse]# 
  1. so my best guess is that all those images are *not* unused or dangling because they're intermediate layers underneath the other images somehow. But how do I see their relationship to the images that use them to confirm this?
  2. looks like you can get an image associated with a container, then inspect that image to get a list of its "Layers"
[root@osestaging1 discourse]# docker container inspect discourse_ose | grep Image
		"Image": "sha256:fadcf78c67778efbbd2824592e6a45f704f1b2b0bbdd81b19982d9cab1a9325a",
			"Image": "local_discourse/discourse_ose",
[root@osestaging1 discourse]# 
[root@osestaging1 discourse]# docker image inspect fadcf78c67778efbbd2824592e6a45f704f1b2b0bbdd81b19982d9cab1a9325a | grep -E 'sha256|Layer'
		"Id": "sha256:fadcf78c67778efbbd2824592e6a45f704f1b2b0bbdd81b19982d9cab1a9325a",
		"Parent": "sha256:f360219e7107cf92f852c028d840c0d73406e581128f9386bd7c86dd68a975ea",
			"Layers": [
				"sha256:831c5620387fb9efec59fc82a42b948546c6be601e3ab34a87108ecf852aa15f",
				"sha256:43e7ac7e75df33eeeca46861ea7f46f730095cdd936e0083097693012bc04d3b",
				"sha256:7dac34941766cb3881a8d64127716ae77f8177687a15eb862d3222a4a58d1602",
				"sha256:a5de6851c90ff09bfe62e5b61cbbb2cd11e89a72bba81be70630ce681751e90d",
				"sha256:866c49e6b5219d769872b4a464ac3478d4ef657d977da0b40967b5427b326932",
				"sha256:64ba1e79b643f94a1b8367da3919ca5d82939548a1b1e25e7becc262e37cf575",
				"sha256:751109f46aa4ac84204c2996403e9624fa21f96809b7ae6468cd6c6aa7de37b9",
				"sha256:e5033220f5c666c3e99c03cad534f7ac0ab78fa137ab9eab5a4a07c303cad397",
				"sha256:0665f1baef759f9133b650633e55b3866a752552bdec9061d46c2f8ef8c33010",
				"sha256:5ecac91fdee9817bdd29754a6758ca11ac1e1bbf792c79a4683c3a57cac4c492",
				"sha256:602d0d9eba1419bf43204ae81a011f1711372ebd8f871ca79741c31fbe0e4396",
				"sha256:32d8c8434568f6f2edaf7c48c1236e272e7b7e643d0c5afab27ccd391664f399",
				"sha256:9f8ac2835a59e530e7c1eeb7071508fe93ec82900713dfd1650eaee71bb6aa9a",
				"sha256:6181f098ba8fbc6908806c081f0ca1225b06af12a79af9790d41c2902f6657bd",
				"sha256:09586d9d7b1e947ec5a845e599f5e96c6fee6e0efdbf4b2251972af741eabee5",
				"sha256:60f8aa5c0901e9d527e572f83825db367d70c017895f67a91ca25fdc6d598478",
				"sha256:a5517edf05d46c21d06c3a465ff7a77e31f82f4d1addc49c0f410d17ae3445b3",
				"sha256:b27d505d919d7ff5d06ff5e1299ac94660e6baf98ac8ea96dba81014f82f6dd9",
				"sha256:2c612aa74dcd32ba1bcbe3f455b915b68ec91177e8faed4bbe47420ca387a3ee",
				"sha256:febaa318222e27b6d8c0570c13cf077905eac488bf482a3b036cb78cffaade21",
				"sha256:3ebf68409c4d6ec224b14e35e583caa02c6e7e737e584bf6bc392b9b99438e20",
				"sha256:525dc0a31383d777986eb4c39bd917cbc07b284a0831a4c738ab77eee3c055c0",
				"sha256:18331a4c1dc26c5a28fffbcb8a53fa0689a3b353d822a6e3d80cce98ad2350d7",
				"sha256:d80179391d8560a148745fdaddc79daa8945b38f0c40b3a74092d72f1a4ac60b",
				"sha256:9baf0a67dbdc4e5d26b5c3d4082ebae9774ee2b949d3d460bb415d711b3525a8",
				"sha256:183a38d8a074a7802878fc5d715b8929a7da01653f1e85f1e712abcbf6b02a67",
				"sha256:0d853ab86cda1ea910ae7951dc52e415065a60d4c643472d8ca6d28be61be5e3",
				"sha256:cd6d0e10cf16bf1d349374acc21b3abc7bfb6b7c933593bdb82c669358256e1d",
				"sha256:692e6808ffd40a33b484d8f55a55b377d99522e5ecd756dcd04ca9456cfc0395",
				"sha256:eb882b253be92bc36b878477eddc4453b99d864025c3943defaa7415abd65394",
				"sha256:ae89ee1ac73a61854e4f03261077e66cf41965ae5a3e8583eb6191204d46c55d",
				"sha256:f7a113e983faa9b1ecb70f5bfe61c40cd4a7b0fa96c0d803e43b20fb2bfc2985",
				"sha256:74cef2501632dae32668131dfae184e9a95013b5f612aa4a72470923aa796ccf",
				"sha256:19aaaae705ebe30e899257597cea17757a510ae379467f3dbd8e70b6f7e83080",
				"sha256:e403e009d87bc327318a3cf36b5208f763b4fe1198cda742e90546ec8c8a129e",
				"sha256:513524a3e665dc7041b74b15fe5461d441e644fc53178fee22ddc191f2552d96",
				"sha256:017cc867124ada74385ecc9bfe21c46c0eb7e3c913efad80a0e09e33f4275b34",
				"sha256:c5f8292db67b38a7ef73fbe90e63057987dd5027c28730db68c1dc7dbf62c10f",
				"sha256:d51ae9cabfdd87d3fe3b509e5aae1adea434b66c571982b5cf68594d856bf8fa",
				"sha256:6a551e4457e17b2b083f089c4b9d5c488fa1155e564df1df5914b92fc476281e",
				"sha256:ec93d7e476b0986d7169628b187b54d6bd0ce7b7f3d3d7fe8c92c6408d238102",
				"sha256:7abbb02a08471bdcdbc9205ca6ebb30a2a7aeb9d54c2147dffbc3a66fae2be8d",
				"sha256:d69380fc40e83c6aefef68233ec191dc189ab5a4a4e5655228c6e7eeb31c2253",
				"sha256:4f7176227617d44bf81c2ae46378c0204d1bf13973265d7ca004833eda614315",
				"sha256:5b2e7ae9de250710d6342258c65ea4146fdcbba6377bc72e67794cd6b329ef82",
				"sha256:9f7c4d9ca95ab331fcd713609787a5c3fd2f2d373132e02318a51d14f4a86172",
				"sha256:e8e763edb56a3be93b4e13e25185dc9799140c086a23a13fb136bbf12658e535",
				"sha256:f571b4432af7641984385fc343caf652ee5e0125b5c154200d4ba9bc0df63141",
				"sha256:104abbeea88c1805ff067647965597a08ad6f99aa5a8cf940eb1598bf878e9bc",
				"sha256:c434976cf9594a79dda8791018bf39a5e7d66af4b99b52e837d75a38862c1b45",
				"sha256:c87575bef25bfcee00e2df91cffc543d5e14dc4bbaf015ea6574a9dcab404a19",
				"sha256:6bfbbc16d5db2e3c27e1dddb93d9b3014639eae1d041f6f1d446d8b4bdf82901",
				"sha256:cd708eb5b3f7e392d40f446406a3dc71a1f6e3bfa2fe889358c9a1ab6c221c47",
				"sha256:5da88e8f02708735921fef95b21768bf3d9eadc88d26788729d0f5f513ba89d7"
[root@osestaging1 discourse]# 
  1. ok, so excluding the 'discourse_ose' and 'hello-world' images (and the header text), we have a total of 61 images
[root@osestaging1 discourse]# docker image ls -a --no-trunc | wc -l
64
[root@osestaging1 discourse]# 
  1. and querying that discourse_ose container's image, we see a reference to 55 images (excluding itself). That's 1 parent image and 54 base images. And a total of 56 images related to 'discourse_ose'
[root@osestaging1 discourse]# docker image inspect fadcf78c67778efbbd2824592e6a45f704f1b2b0bbdd81b19982d9cab1a9325a | grep -i sha256 | sort | uniq | wc -l
56
[root@osestaging1 discourse]# docker image inspect fadcf78c67778efbbd2824592e6a45f704f1b2b0bbdd81b19982d9cab1a9325a | grep -i sha256 | sort | uniq
		"Id": "sha256:fadcf78c67778efbbd2824592e6a45f704f1b2b0bbdd81b19982d9cab1a9325a",
		"Parent": "sha256:f360219e7107cf92f852c028d840c0d73406e581128f9386bd7c86dd68a975ea",
				"sha256:017cc867124ada74385ecc9bfe21c46c0eb7e3c913efad80a0e09e33f4275b34",
				"sha256:0665f1baef759f9133b650633e55b3866a752552bdec9061d46c2f8ef8c33010",
				"sha256:09586d9d7b1e947ec5a845e599f5e96c6fee6e0efdbf4b2251972af741eabee5",
				"sha256:0d853ab86cda1ea910ae7951dc52e415065a60d4c643472d8ca6d28be61be5e3",
				"sha256:104abbeea88c1805ff067647965597a08ad6f99aa5a8cf940eb1598bf878e9bc",
				"sha256:18331a4c1dc26c5a28fffbcb8a53fa0689a3b353d822a6e3d80cce98ad2350d7",
				"sha256:183a38d8a074a7802878fc5d715b8929a7da01653f1e85f1e712abcbf6b02a67",
				"sha256:19aaaae705ebe30e899257597cea17757a510ae379467f3dbd8e70b6f7e83080",
				"sha256:2c612aa74dcd32ba1bcbe3f455b915b68ec91177e8faed4bbe47420ca387a3ee",
				"sha256:32d8c8434568f6f2edaf7c48c1236e272e7b7e643d0c5afab27ccd391664f399",
				"sha256:3ebf68409c4d6ec224b14e35e583caa02c6e7e737e584bf6bc392b9b99438e20",
				"sha256:43e7ac7e75df33eeeca46861ea7f46f730095cdd936e0083097693012bc04d3b",
				"sha256:4f7176227617d44bf81c2ae46378c0204d1bf13973265d7ca004833eda614315",
				"sha256:513524a3e665dc7041b74b15fe5461d441e644fc53178fee22ddc191f2552d96",
				"sha256:525dc0a31383d777986eb4c39bd917cbc07b284a0831a4c738ab77eee3c055c0",
				"sha256:5b2e7ae9de250710d6342258c65ea4146fdcbba6377bc72e67794cd6b329ef82",
				"sha256:5da88e8f02708735921fef95b21768bf3d9eadc88d26788729d0f5f513ba89d7"
				"sha256:5ecac91fdee9817bdd29754a6758ca11ac1e1bbf792c79a4683c3a57cac4c492",
				"sha256:602d0d9eba1419bf43204ae81a011f1711372ebd8f871ca79741c31fbe0e4396",
				"sha256:60f8aa5c0901e9d527e572f83825db367d70c017895f67a91ca25fdc6d598478",
				"sha256:6181f098ba8fbc6908806c081f0ca1225b06af12a79af9790d41c2902f6657bd",
				"sha256:64ba1e79b643f94a1b8367da3919ca5d82939548a1b1e25e7becc262e37cf575",
				"sha256:692e6808ffd40a33b484d8f55a55b377d99522e5ecd756dcd04ca9456cfc0395",
				"sha256:6a551e4457e17b2b083f089c4b9d5c488fa1155e564df1df5914b92fc476281e",
				"sha256:6bfbbc16d5db2e3c27e1dddb93d9b3014639eae1d041f6f1d446d8b4bdf82901",
				"sha256:74cef2501632dae32668131dfae184e9a95013b5f612aa4a72470923aa796ccf",
				"sha256:751109f46aa4ac84204c2996403e9624fa21f96809b7ae6468cd6c6aa7de37b9",
				"sha256:7abbb02a08471bdcdbc9205ca6ebb30a2a7aeb9d54c2147dffbc3a66fae2be8d",
				"sha256:7dac34941766cb3881a8d64127716ae77f8177687a15eb862d3222a4a58d1602",
				"sha256:831c5620387fb9efec59fc82a42b948546c6be601e3ab34a87108ecf852aa15f",
				"sha256:866c49e6b5219d769872b4a464ac3478d4ef657d977da0b40967b5427b326932",
				"sha256:9baf0a67dbdc4e5d26b5c3d4082ebae9774ee2b949d3d460bb415d711b3525a8",
				"sha256:9f7c4d9ca95ab331fcd713609787a5c3fd2f2d373132e02318a51d14f4a86172",
				"sha256:9f8ac2835a59e530e7c1eeb7071508fe93ec82900713dfd1650eaee71bb6aa9a",
				"sha256:a5517edf05d46c21d06c3a465ff7a77e31f82f4d1addc49c0f410d17ae3445b3",
				"sha256:a5de6851c90ff09bfe62e5b61cbbb2cd11e89a72bba81be70630ce681751e90d",
				"sha256:ae89ee1ac73a61854e4f03261077e66cf41965ae5a3e8583eb6191204d46c55d",
				"sha256:b27d505d919d7ff5d06ff5e1299ac94660e6baf98ac8ea96dba81014f82f6dd9",
				"sha256:c434976cf9594a79dda8791018bf39a5e7d66af4b99b52e837d75a38862c1b45",
				"sha256:c5f8292db67b38a7ef73fbe90e63057987dd5027c28730db68c1dc7dbf62c10f",
				"sha256:c87575bef25bfcee00e2df91cffc543d5e14dc4bbaf015ea6574a9dcab404a19",
				"sha256:cd6d0e10cf16bf1d349374acc21b3abc7bfb6b7c933593bdb82c669358256e1d",
				"sha256:cd708eb5b3f7e392d40f446406a3dc71a1f6e3bfa2fe889358c9a1ab6c221c47",
				"sha256:d51ae9cabfdd87d3fe3b509e5aae1adea434b66c571982b5cf68594d856bf8fa",
				"sha256:d69380fc40e83c6aefef68233ec191dc189ab5a4a4e5655228c6e7eeb31c2253",
				"sha256:d80179391d8560a148745fdaddc79daa8945b38f0c40b3a74092d72f1a4ac60b",
				"sha256:e403e009d87bc327318a3cf36b5208f763b4fe1198cda742e90546ec8c8a129e",
				"sha256:e5033220f5c666c3e99c03cad534f7ac0ab78fa137ab9eab5a4a07c303cad397",
				"sha256:e8e763edb56a3be93b4e13e25185dc9799140c086a23a13fb136bbf12658e535",
				"sha256:eb882b253be92bc36b878477eddc4453b99d864025c3943defaa7415abd65394",
				"sha256:ec93d7e476b0986d7169628b187b54d6bd0ce7b7f3d3d7fe8c92c6408d238102",
				"sha256:f571b4432af7641984385fc343caf652ee5e0125b5c154200d4ba9bc0df63141",
				"sha256:f7a113e983faa9b1ecb70f5bfe61c40cd4a7b0fa96c0d803e43b20fb2bfc2985",
				"sha256:febaa318222e27b6d8c0570c13cf077905eac488bf482a3b036cb78cffaade21",
[root@osestaging1 discourse]# 
  1. the hello-world image inspection lists another 4x unique sha256 values: the image itself (its "Id"), its "RepoDigest", the "Image" referenced by its "ContainerConfig"/"Config" (both have the same value), and 1x Layer. Adding to the discourse_ose list, that's now 60x images. Still 4x missing!
[root@osestaging1 discourse]# docker image inspect fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e | grep sha256 | sort | uniq | wc -l
4
[root@osestaging1 discourse]# docker image inspect fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e | grep sha256 | sort | uniq
			"hello-world@sha256:c3b4ada4687bbaa170745b3e4dd8ac3f194ca95b2d0518b417fb47e5879d9b5f"
		"Id": "sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e",
			"Image": "sha256:a6d1aaad8ca65655449a26146699fe9d61240071f6992975be7e720f1cd42440",
				"sha256:af0b15c8625bb1938f1d7b17081031f649fd14e6b233688eea3c5483994a66a3"
[root@osestaging1 discourse]# 
  1. I did a diff to isolate these remaining images and see what they are
[root@osestaging1 ~]# cd /var/tmp/
[root@osestaging1 tmp]# mkdir dockerImageDiff.20200308
[root@osestaging1 tmp]# cd dockerImageDiff.20200308/
[root@osestaging1 dockerImageDiff.20200308]# 
[root@osestaging1 dockerImageDiff.20200308]# docker image ls -a --no-trunc | tail -n+2 | awk '{print $3}' | sort | uniq &> allImages.txt
[root@osestaging1 dockerImageDiff.20200308]# wc -l allImages.txt 
63 allImages.txt
[root@osestaging1 dockerImageDiff.20200308]# head -n1 allImages.txt 
sha256:07427db10040897883d436fbaf9a59b8ccc7e54079c3ce8fa10acfcf10bec534
[root@osestaging1 dockerImageDiff.20200308]# tail -n1 allImages.txt 
sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e
[root@osestaging1 dockerImageDiff.20200308]# 
[root@osestaging1 dockerImageDiff.20200308]# wc -l discourse_oseImages.txt 
56 discourse_oseImages.txt
[root@osestaging1 dockerImageDiff.20200308]# head -n1 discourse_oseImages.txt 
sha256:017cc867124ada74385ecc9bfe21c46c0eb7e3c913efad80a0e09e33f4275b34
[root@osestaging1 dockerImageDiff.20200308]# tail -n1 discourse_oseImages.txt 
sha256:febaa318222e27b6d8c0570c13cf077905eac488bf482a3b036cb78cffaade21
[root@osestaging1 dockerImageDiff.20200308]# 
  1. holy shit, there are actually fewer common lines in those files than different lines! Only 2x of our discourse container's image references are in our list of all the images on the system. That doesn't make sense though...
[root@osestaging1 dockerImageDiff.20200308]# diff --suppress-common-lines allImages.txt discourse_oseImages.txt | wc -l
121
[root@osestaging1 dockerImageDiff.20200308]# 
  1. grabbing a random sha256 from that list of base image "Layers" in our discourse_ose image, yeah, it's not an image... (more on why after the output below)
[root@osestaging1 dockerImageDiff.20200308]# docker image inspect d69380fc40e83c6aefef68233ec191dc189ab5a4a4e5655228c6e7eeb31c2253
[]
Error: No such image: d69380fc40e83c6aefef68233ec191dc189ab5a4a4e5655228c6e7eeb31c2253
[root@osestaging1 dockerImageDiff.20200308]# 
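  1. I'm pretty sure the explanation is that the sha256 values under "Layers" in `docker image inspect` are RootFS layer digests (content hashes of the filesystem diffs that make up the image), *not* image IDs, which is why they don't line up with `docker image ls -a` output and why inspecting one of them as an image fails. The `<none>` entries in `docker image ls -a` are just intermediate images left behind by previous builds of the base image. For future reference, a couple of commands that make this relationship clearer (a sketch; I didn't run these here):
# the "Layers" values are filesystem layer digests, not image IDs:
docker image inspect --format '{{json .RootFS.Layers}}' local_discourse/discourse_ose
# the intermediate (build-time) images and their relationship to the final image are visible with:
docker history --no-trunc local_discourse/discourse_ose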
  1. whatever, I think what we have is good enough.
  2. Also, it looks like you can't actually use the 'until' filter to clean out volumes with `docker system prune`, so I just dropped the '--volumes' flag to err on the side of not deleting data that we need during a (failed) upgrade.
  3. I documented how to install this script, its cron, and how to set up the logging dir on the wiki
  4. ...
  5. I tried to access the discourse site on staging, but I got an error "Unable to connect"
Firefox can’t establish a connection to the server at discourse.opensourceecology.org.
  1. Again, I confirmed that the discourse container is running in docker
[root@osestaging1 discourse]# docker ps
CONTAINER ID        IMAGE                           COMMAND             CREATED             STATUS              PORTS               NAMES
7987b80223d8        local_discourse/discourse_ose   "/sbin/boot"        2 months ago        Up 3 weeks                              discourse_ose
[root@osestaging1 discourse]# 
  1. I tried to enter the container to see if nginx was running, but I hit an issue with docker *facepalm*
[root@osestaging1 discourse]# /var/discourse/launcher enter discourse_ose
Unable to find image 'discourse_ose:latest' locally
/bin/docker: Error response from daemon: pull access denied for discourse_ose, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See '/bin/docker run --help'.
Your Docker installation is not working correctly

See: https://meta.discourse.org/t/docker-error-on-bootstrap/13657/18?u=sam
[root@osestaging1 discourse]#
  1. I tried restarting the docker daemon; it didn't help
[root@osestaging1 discourse]# systemctl docker status
Unknown operation 'docker'.
[root@osestaging1 discourse]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-02-16 08:50:10 UTC; 3 weeks 0 days ago
	 Docs: https://docs.docker.com
 Main PID: 345 (dockerd)
   Memory: 148.5M
   CGroup: /user.slice/user-1000.slice/session-1.scope/system.slice/docker.service
		   └─345 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Mar 08 13:46:56 osestaging1 dockerd[345]: time="2020-03-08T13:46:56.671735261Z" level=warning msg="failed to prune image docker.io/library/hello-wor...d:latest"
Mar 08 13:46:56 osestaging1 dockerd[345]: time="2020-03-08T13:46:56.671892904Z" level=warning msg="failed to prune image docker.io/library/hello-wor...879d9b5f"
Mar 08 13:46:56 osestaging1 dockerd[345]: time="2020-03-08T13:46:56.804124574Z" level=warning msg="failed to prune image docker.io/library/hello-wor...d:latest"
Mar 08 13:46:56 osestaging1 dockerd[345]: time="2020-03-08T13:46:56.804218291Z" level=warning msg="failed to prune image docker.io/library/hello-wor...879d9b5f"
Mar 08 13:47:12 osestaging1 dockerd[345]: time="2020-03-08T13:47:12.805438962Z" level=warning msg="failed to prune image docker.io/library/hello-wor...d:latest"
Mar 08 13:47:12 osestaging1 dockerd[345]: time="2020-03-08T13:47:12.805506919Z" level=warning msg="failed to prune image docker.io/library/hello-wor...879d9b5f"
Mar 08 13:47:12 osestaging1 dockerd[345]: time="2020-03-08T13:47:12.908624912Z" level=warning msg="failed to prune image docker.io/library/hello-wor...d:latest"
Mar 08 13:47:12 osestaging1 dockerd[345]: time="2020-03-08T13:47:12.908681348Z" level=warning msg="failed to prune image docker.io/library/hello-wor...879d9b5f"
Mar 08 15:34:31 osestaging1 dockerd[345]: time="2020-03-08T15:34:31.852681882Z" level=error msg="Not continuing with pull after error: errors:\ndeni...quired\n"
Mar 08 15:34:31 osestaging1 dockerd[345]: time="2020-03-08T15:34:31.852831361Z" level=info msg="Ignoring extra error returned from registry: unautho...required"
Hint: Some lines were ellipsized, use -l to show in full.
[root@osestaging1 discourse]# systemctl restart docker
[root@osestaging1 discourse]# /var/discourse/launcher enter discourse_ose
Unable to find image 'discourse_ose:latest' locally
/bin/docker: Error response from daemon: pull access denied for discourse_ose, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See '/bin/docker run --help'.
Your Docker installation is not working correctly

See: https://meta.discourse.org/t/docker-error-on-bootstrap/13657/18?u=sam
[root@osestaging1 discourse]# 
  1. I did the hello world test as the topic linked above suggested, but that worked fine
[root@osestaging1 discourse]# docker run -it --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
Digest: sha256:fc6a51919cfeb2e6763f62b6d9e8815acbf7cd2e476ea353743570610737b752
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
	(amd64)
 3. The Docker daemon created a new container from that image which runs the
	executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
	to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

[root@osestaging1 discourse]# 
  1. I also confirmed that disk space isn't an issue
[root@osestaging1 discourse]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/ose_dev_volume_1  125G   70G   49G  59% /
devtmpfs                      873M     0  873M   0% /dev
tmpfs                         896M     0  896M   0% /dev/shm
tmpfs                         896M   97M  799M  11% /run
tmpfs                         896M     0  896M   0% /sys/fs/cgroup
tmpfs                         180M     0  180M   0% /run/user/991
tmpfs                         180M     0  180M   0% /run/user/0
tmpfs                         180M     0  180M   0% /run/user/1005
overlay                       125G   70G   49G  59% /var/lib/docker/overlay2/8faf856790ee1d6f1c21eef7c0b27a4e6193d8e10dca9170edaefc9d95ad8d5d/merged
[root@osestaging1 discourse]# 
  1. double-checking this error, I bet this is the issue
Unable to find image 'discourse_ose:latest' locally
  1. checking my output from above, there used to be two discourse_ose images (one named "discourse_ose" and the other named "local_discourse/discourse_ose"). The former got deleted during the prune (actually by discourse's own `./launcher cleanup`), and I'm guessing that's the issue. Ugh, I don't know a great solution for this. Of course I can update my script, but I'm afraid that an admin may end up doing a prune later and breaking things.
  2. anyway, let's try the docker build from our install guide on the wiki again and see what happens
  3. first, here's the list of docker images
[root@osestaging1 discourse]# date
Sun Mar  8 15:51:00 UTC 2020
[root@osestaging1 discourse]# docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 discourse]# 
  1. And now we do the build
[root@osestaging1 discourse]# date && time docker build --tag 'discourse_ose' /var/discourse/image/base/
Sun Mar  8 15:51:20 UTC 2020
Sending build context to Docker daemon  61.44kB
...
Removing intermediate container 68ec7875f97c
 ---> 9087b8ab8a6b
Successfully built 9087b8ab8a6b
Successfully tagged discourse_ose:latest

real    41m46.775s
user    0m0.937s
sys     0m0.606s
[root@osestaging1 discourse]# 
  1. that whole process (42 minutes!) appears to have added two new images: a debian image and the 'discourse_ose' image
[root@osestaging1 discourse]# docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
discourse_ose                   latest              9087b8ab8a6b        7 minutes ago       2.3GB
debian                          buster-slim         2f14a0fb67b9        11 days ago         69.2MB
local_discourse/discourse_ose   latest              fadcf78c6777        2 months ago        2.69GB
hello-world                     latest              fce289e99eb9        14 months ago       1.84kB
[root@osestaging1 discourse]# 
  1. And now I can enter the discourse container!
[root@osestaging1 discourse]# /var/discourse/launcher enter discourse_ose
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. I still can't access the site, though
user@ose:~$ cat /etc/resolv.conf
nameserver 10.241.189.1
user@ose:~$ dig +short discourse.opensourceecology.org
10.241.189.11
user@ose:~$ curl discourse.opensourceecology.org
curl: (7) Failed to connect to discourse.opensourceecology.org port 80: Connection refused
user@ose:~$ 
  1. it does look like nginx is running; let's try to restart it
root@osestaging1-discourse-ose:/var/www/discourse# sv status nginx
run: nginx: (pid 551) 3762s
root@osestaging1-discourse-ose:/var/www/discourse# sv stop nginx && sv start nginx
ok: down: nginx: 0s, normally up
ok: run: nginx: (pid 5343) 1s
root@osestaging1-discourse-ose:/var/www/discourse# 
  1. nope, still not listening. Oh wait, what about the nginx proxy running on the staging host *outside* the discourse container? It's stopped!
[root@osestaging1 cron.d]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2020-02-16 08:50:00 UTC; 3 weeks 0 days ago
  Process: 343 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
  Process: 332 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
[root@osestaging1 cron.d]# systemctl restart nginx
[root@osestaging1 cron.d]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-03-08 16:49:16 UTC; 9s ago
  Process: 28502 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 28501 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 28500 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 28503 (nginx)
   Memory: 7.6M
   CGroup: /user.slice/user-1000.slice/session-1.scope/system.slice/nginx.service
		   ├─28503 nginx: master process /usr/sbin/nginx
		   └─28504 nginx: worker process

Mar 08 16:49:16 osestaging1 nginx[28502]: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/...nclude:11
Mar 08 16:49:16 osestaging1 nginx[28502]: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/...nclude:11
Mar 08 16:49:16 osestaging1 nginx[28502]: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/...nclude:11
Mar 08 16:49:16 osestaging1 nginx[28502]: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/...nclude:11
Mar 08 16:49:16 osestaging1 nginx[28502]: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/...nclude:11
Mar 08 16:49:16 osestaging1 nginx[28502]: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/...nclude:11
Mar 08 16:49:16 osestaging1 nginx[28502]: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/...nclude:11
Mar 08 16:49:16 osestaging1 nginx[28502]: nginx: [warn] conflicting server name "_" on 10.241.189.11:443, ignored
Mar 08 16:49:16 osestaging1 systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
Mar 08 16:49:16 osestaging1 systemd[1]: Started The nginx HTTP and reverse proxy server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@osestaging1 cron.d]# 
  1. Now I can connect! I added this hint to the troubleshooting section on the wiki https://wiki.opensourceecology.org/wiki/Discourse#Unable_to_Connect
  2. ok, so the next thing that I want to do is to make a backup, re-sync our production data down to the staging server (effectively clobbering the discourse install entirely) and then test a fresh install following my guide and a restore of its data from the backup.
  3. first, I make the backup. I'll keep it in '/root', since that's explicitly excluded in the rsync from prod to staging
[root@osestaging1 ~]# # SETTINGS
[root@osestaging1 ~]# backupDirPath=/root/discourseBackup
[root@osestaging1 ~]# archiveDirName=20200308
[root@osestaging1 ~]# 
[root@osestaging1 ~]# NICE='/bin/nice'
[root@osestaging1 ~]# TAR='/bin/tar'
[root@osestaging1 ~]# MKDIR='/bin/mkdir'
[root@osestaging1 ~]# RM='/bin/rm'
[root@osestaging1 ~]# MV='/bin/mv'
[root@osestaging1 ~]# DOCKER='/bin/docker'
[root@osestaging1 ~]# 
[root@osestaging1 ~]# 
[root@osestaging1 ~]# mkdir ${backupDirPath}
[root@osestaging1 ~]# mkdir "${backupDirPath}/${archiveDirName}"
[root@osestaging1 ~]# echo "${backupDirPath}/${archiveDirName}"
/root/discourseBackup/20200308
[root@osestaging1 ~]# ls -lah "${backupDirPath}/${archiveDirName}"
total 8.0K
drwxr-xr-x. 2 root root 4.0K Mar  8 17:09 .
drwxr-xr-x. 3 root root 4.0K Mar  8 17:09 ..
[root@osestaging1 ~]# 
[root@osestaging1 ~]# #############
[root@osestaging1 ~]# # DISCOURSE #
[root@osestaging1 ~]# #############
[root@osestaging1 ~]# 
[root@osestaging1 ~]# # cleanup old backups
[root@osestaging1 ~]# $NICE $RM -rf /var/discourse/shared/standalone/backups/default/*.tar.gz
[root@osestaging1 ~]# time $NICE $DOCKER exec discourse_ose discourse backup
...
Backup done.
Output file is in: /var/www/discourse/public/backups/default/discourse-2020-03-08-172140-v20191219112000.tar.gz


real    0m36.622s
user    0m0.053s
sys     0m0.076s
[root@osestaging1 ~]# $NICE $MV /var/discourse/shared/standalone/backups/default/*.tar.gz "${backupDirPath}/${archiveDirName}/discourse_ose/"
[root@osestaging1 ~]# 
[root@osestaging1 ~]# #########
[root@osestaging1 ~]# # FILES #
[root@osestaging1 ~]# #########
[root@osestaging1 ~]# 
[root@osestaging1 ~]# # /var/discourse
[root@osestaging1 ~]# echo -e "\tINFO: /var/discourse"
		INFO: /var/discourse
[root@osestaging1 ~]# $MKDIR "${backupDirPath}/${archiveDirName}/discourse_ose"
/bin/mkdir: cannot create directory ‘/root/discourseBackup/20200308/discourse_ose’: File exists
[root@osestaging1 ~]# time $NICE $TAR --exclude "/var/discourse/shared/standalone/postgres_data" --exclude "/var/discourse/shared/standalone/postgres_data/uploads" --exclude "/var/discourse/shared/standalone/backups" -czf ${backupDirPath}/${archiveDirName}/discourse_ose/discourse_ose.${stamp}.tar.gz /var/discourse/*
/bin/tar: Removing leading `/' from member names
/bin/tar: /var/discourse/shared/standalone/nginx.http.sock: socket ignored
/bin/tar: /var/discourse/shared/standalone/postgres_run/.s.PGSQL.5432: socket ignored

real    0m1.297s
user    0m0.677s
sys     0m0.124s
[root@osestaging1 ~]# 
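  1. side note: the 'File exists' complaint from $MKDIR above is harmless, but when this gets turned into a proper backup script it'd be cleaner to use 'mkdir -p', which silently succeeds if the directory already exists, e.g.
$MKDIR -p "${backupDirPath}/${archiveDirName}/discourse_ose"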
  1. ok, that created two files, safely stored under /root
[root@osestaging1 ~]# ls -lah "${backupDirPath}/${archiveDirName}"
total 12K
drwxr-xr-x. 3 root root 4.0K Mar  8 17:14 .
drwxr-xr-x. 3 root root 4.0K Mar  8 17:09 ..
drwxr-xr-x. 2 root root 4.0K Mar  8 17:24 discourse_ose
[root@osestaging1 ~]# ls -lah "${backupDirPath}/${archiveDirName}/discourse_ose"
total 59M
drwxr-xr-x. 2 root      root      4.0K Mar  8 17:24 .
drwxr-xr-x. 3 root      root      4.0K Mar  8 17:14 ..
-rw-r--r--. 1 tgriffing tgriffing  56M Mar  8 17:21 discourse-2020-03-08-172140-v20191219112000.tar.gz
-rw-r--r--. 1 root      root      3.9M Mar  8 17:22 discourse_ose.20200308_17:04:50.tar.gz
[root@osestaging1 ~]# 
  1. there's some redundancy between the two, but whatever. The first one is just the database sql dump and uploaded images (which should be everything needed to restore the state of our discourse site after a fresh install), and the second has everything else in the /var/discourse dir, excluding the above backups. The latter might be useful to avoid having to follow the install guide during a restore (a rough sketch of the planned restore commands follows the listing below)
[root@osestaging1 ~]# tar --list -f /root/discourseBackup/20200308/discourse_ose/discourse-2020-03-08-172140-v20191219112000.tar.gz 
dump.sql.gz
uploads/default/
uploads/default/original/
uploads/default/original/1X/
uploads/default/original/1X/e952cfd4c1bc58e77024e4c2b518531356319780.png
[root@osestaging1 ~]# 
[root@osestaging1 ~]# tar --list -f /root/discourseBackup/20200308/discourse_ose/discourse_ose.20200308_17\:04\:50.tar.gz | wc -l
418
[root@osestaging1 ~]# tar --list -f /root/discourseBackup/20200308/discourse_ose/discourse_ose.20200308_17\:04\:50.tar.gz | grep -E '/$'
var/discourse/bin/
var/discourse/cids/
var/discourse/containers/
var/discourse/image/
var/discourse/image/discourse_bench/
var/discourse/image/discourse_dev/
var/discourse/image/monitor/
var/discourse/image/monitor/src/
var/discourse/image/discourse_fast_switch/
var/discourse/image/discourse_test/
var/discourse/image/base/
var/discourse/libbrotli/
var/discourse/libbrotli/brotli/
var/discourse/libbrotli/tools/
var/discourse/libbrotli/.git/
var/discourse/libbrotli/.git/refs/
var/discourse/libbrotli/.git/refs/remotes/
var/discourse/libbrotli/.git/refs/remotes/origin/
var/discourse/libbrotli/.git/refs/tags/
var/discourse/libbrotli/.git/refs/heads/
var/discourse/libbrotli/.git/branches/
var/discourse/libbrotli/.git/info/
var/discourse/libbrotli/.git/hooks/
var/discourse/libbrotli/.git/logs/
var/discourse/libbrotli/.git/logs/refs/
var/discourse/libbrotli/.git/logs/refs/remotes/
var/discourse/libbrotli/.git/logs/refs/remotes/origin/
var/discourse/libbrotli/.git/logs/refs/heads/
var/discourse/libbrotli/.git/objects/
var/discourse/libbrotli/.git/objects/pack/
var/discourse/libbrotli/.git/objects/info/
var/discourse/samples/
var/discourse/scripts/
var/discourse/shared/
var/discourse/shared/standalone/
var/discourse/shared/standalone/tmp/
var/discourse/shared/standalone/tmp/backups/
var/discourse/shared/standalone/tmp/backups/default/
var/discourse/shared/standalone/tmp/restores/
var/discourse/shared/standalone/redis_data/
var/discourse/shared/standalone/state/
var/discourse/shared/standalone/state/logrotate/
var/discourse/shared/standalone/state/anacron-spool/
var/discourse/shared/standalone/uploads/
var/discourse/shared/standalone/uploads/default/
var/discourse/shared/standalone/uploads/default/original/
var/discourse/shared/standalone/uploads/default/original/1X/
var/discourse/shared/standalone/uploads/default/optimized/
var/discourse/shared/standalone/uploads/default/optimized/1X/
var/discourse/shared/standalone/uploads/tombstone/
var/discourse/shared/standalone/uploads/tombstone/default/
var/discourse/shared/standalone/uploads/tombstone/default/original/
var/discourse/shared/standalone/uploads/tombstone/default/original/1X/
var/discourse/shared/standalone/postgres_backup/
var/discourse/shared/standalone/postgres_run/
var/discourse/shared/standalone/postgres_run/10-main.pg_stat_tmp/
var/discourse/shared/standalone/log/
var/discourse/shared/standalone/log/rails/
var/discourse/shared/standalone/log/var-log/
var/discourse/shared/standalone/log/var-log/redis/
var/discourse/shared/standalone/log/var-log/postgres/
var/discourse/shared/standalone/log/var-log/apt/
var/discourse/shared/standalone/log/var-log/nginx/
var/discourse/shared/standalone/log/var-log/unattended-upgrades/
var/discourse/templates/
var/discourse/templates/import/
[root@osestaging1 ~]#
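  1. for later reference, the restore I plan to test should look roughly like the below (a sketch from memory of the Discourse CLI; the copy-back path assumes the container's backups dir is bind-mounted from /var/discourse/shared/standalone/backups/default, and the enable_restore step should be double-checked against the wiki guide before running it):
# copy the backup into the dir that's bind-mounted into the container as its backups dir
cp /root/discourseBackup/20200308/discourse_ose/discourse-2020-03-08-172140-v20191219112000.tar.gz \
  /var/discourse/shared/standalone/backups/default/
# allow restores, then restore from inside the container
docker exec discourse_ose discourse enable_restore
docker exec discourse_ose discourse restore discourse-2020-03-08-172140-v20191219112000.tar.gz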
  1. ok, let's kick off the sync now
  2. first, I discovered that prod couldn't ping staging; that's a problem
[maltfield@opensourceecology ~]$ ping -c5 10.241.189.11
PING 10.241.189.11 (10.241.189.11) 56(84) bytes of data.

--- 10.241.189.11 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4000ms

[maltfield@opensourceecology ~]$ 
  1. sure enough, the vpn service is in a failed state
[root@opensourceecology ~]# systemctl status openvpn-client
● openvpn-client.service
   Loaded: loaded (/etc/systemd/system/openvpn-client.service; enabled; vendor preset: disabled)
   Active: failed (Result: timeout) since Mon 2019-12-16 17:07:16 UTC; 2 months 22 days ago
	 Docs: man:openvpn(8)
		   https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
		   https://community.openvpn.net/openvpn/wiki/HOWTO
  Process: 28911 ExecStart=/etc/openvpn/client/connect.sh (code=killed, signal=TERM)
 Main PID: 28911 (code=killed, signal=TERM)
   CGroup: /system.slice/openvpn-client.service

Jan 02 10:46:27 opensourceecology.org connect.sh[28911]: VERIFY EKU OK
Jan 02 10:46:27 opensourceecology.org connect.sh[28911]: VERIFY OK: depth=0, CN=server
Jan 02 10:46:27 opensourceecology.org connect.sh[28911]: Control Channel: TLSv1.2, cipher TLSv1/S...SA
Jan 02 10:46:27 opensourceecology.org connect.sh[28911]: [server] Peer Connection Initiated with ...94
Jan 02 10:46:28 opensourceecology.org connect.sh[28911]: SENT CONTROL [server]: 'PUSH_REQUEST' (s...1)
Jan 02 10:46:28 opensourceecology.org connect.sh[28911]: AUTH: Received control message: AUTH_FAILED
Jan 02 10:46:28 opensourceecology.org connect.sh[28911]: /sbin/ip route del 10.241.189.0/24
Jan 02 10:46:28 opensourceecology.org connect.sh[28911]: Closing TUN/TAP interface
Jan 02 10:46:28 opensourceecology.org connect.sh[28911]: /sbin/ip addr del dev tun0 local 10.241....55
Jan 02 10:46:28 opensourceecology.org connect.sh[28911]: SIGTERM[soft,auth-failure] received, pro...ng
Hint: Some lines were ellipsized, use -l to show in full.
[root@opensourceecology ~]# 
  1. I gave it a restart; the command didn't actually exit
[root@opensourceecology ~]# systemctl restart openvpn-client

  1. but the pings go through now *shrug*
[root@opensourceecology ~]# ping -c5 10.241.189.11
PING 10.241.189.11 (10.241.189.11) 56(84) bytes of data.
64 bytes from 10.241.189.11: icmp_seq=1 ttl=64 time=1.13 ms
64 bytes from 10.241.189.11: icmp_seq=2 ttl=64 time=0.785 ms
64 bytes from 10.241.189.11: icmp_seq=3 ttl=64 time=1.43 ms
64 bytes from 10.241.189.11: icmp_seq=4 ttl=64 time=1.06 ms
64 bytes from 10.241.189.11: icmp_seq=5 ttl=64 time=1.17 ms

--- 10.241.189.11 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.785/1.119/1.438/0.211 ms
[root@opensourceecology ~]# 
  1. and I kicked off the sync. It only took about 16 minutes to sync several months of delta. Cool!
[root@opensourceecology ~]# time nice /root/bin/syncToStaging.sh &> /var/log/syncToStaging.log

real    16m19.048s
user    2m55.992s
sys     0m28.913s
[root@opensourceecology ~]# 

  1. Unfortunately, the sync appears to have been non-destructive on the destination; the old cruft (launcher.* backups, etc) is all still there
[root@osestaging1 ~]# ls /var/discourse/
bin               discourse-setup  image       launcher.20191118_122249  launcher.20191217_104906  launcher.old  README.md  templates
cids              docker-ce.repo   index.html  launcher.20191118.orig    launcher.20191228_143104  libbrotli     samples    Vagrantfile
containers        docker.gpg       install.sh  launcher.20191217         launcher.20191228_144202  LICENSE       scripts
discourse-doctor  get-docker.sh    launcher    launcher.20191217_074503  launcher.new              output.log    shared
[root@osestaging1 ~]# 
  1. I updated the script to include the '--delete' argument. Note that, by default, this will *not* delete destination files under directories that were excluded with '--exclude' (a sketch of the relevant rsync flags follows the output below)
  2. This (destructive) double-tap just took 5 minutes. Sweet!
[root@opensourceecology ~]# time nice /root/bin/syncToStaging.sh &> /var/log/syncToStaging.log

real    5m4.192s
user    0m39.333s
sys     0m22.647s
[root@opensourceecology ~]# 
  1. ok, that worked!
[root@osestaging1 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/ose_dev_volume_1  125G   68G   52G  57% /
devtmpfs                      873M     0  873M   0% /dev
tmpfs                         896M     0  896M   0% /dev/shm
tmpfs                         896M   97M  799M  11% /run
tmpfs                         896M     0  896M   0% /sys/fs/cgroup
tmpfs                         180M     0  180M   0% /run/user/991
tmpfs                         180M     0  180M   0% /run/user/0
tmpfs                         180M     0  180M   0% /run/user/1005
overlay                       125G   68G   52G  57% /var/lib/docker/overlay2/8faf856790ee1d6f1c21eef7c0b27a4e6193d8e10dca9170edaefc9d95ad8d5d/merged
[root@osestaging1 ~]# ls /var/discourse
ls: cannot access /var/discourse: No such file or directory
[root@osestaging1 ~]# ls /var
adm  cache  crash  db  empty  games  gopher  kerberos  lib  local  lock  log  mail  nis  opt  ossec  preserve  run  spool  tmp  webmin  www  yp
[root@osestaging1 ~]# 
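  1. For the record, the relevant rsync semantics look roughly like this (an illustrative sketch only -- not the actual contents of /root/bin/syncToStaging.sh; the host alias and exclude path below are made up):
# --delete removes destination files that no longer exist on the source,
# but by default it does *not* touch anything under a path skipped with --exclude;
# adding --delete-excluded would purge the excluded paths on the destination too
rsync -av --delete --exclude='some/excluded/dir/' /var/discourse/ osestaging1:/var/discourse/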
  1. I hit 'opensourceecology.org/is_staging' in the browser and confirmed that it's working and returns 'true'. I checked the cert, and it's been updated. Looks good.
  2. 'https://discourse.opensourceecology.org/' now redirects to the fef site, probably falling through to the default vhost for some reason. Anyway, the Discourse install is totally gone. Let's reinstall.
  3. docker still hasn't addressed the glaring absence of any out-of-band way to validate their gpg key. Fortunately, I previously downloaded it (TOFU) into my personal keyring
user@personal:~$ gpg --list-keys docker
pub   rsa4096/0xC52FEB6B621E9F35 2017-02-22 [SCEA]
	  Key fingerprint = 060A 61C5 1B55 8A7F 742B  77AA C52F EB6B 621E 9F35
uid                   [ unknown] Docker Release (CE rpm) <docker@docker.com>

user@personal:~$ 
  1. I downloaded the key from their server onto the staging server and confirmed that the fingerprint matches (an alternative pre-import check is sketched after the output below)
[root@osestaging1 ~]# wget https://download.docker.com/linux/centos/gpg
--2020-03-08 18:13:22--  https://download.docker.com/linux/centos/gpg
Resolving download.docker.com (download.docker.com)... 99.86.3.124, 99.86.3.98, 99.86.3.52, ...
Connecting to download.docker.com (download.docker.com)|99.86.3.124|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1627 (1.6K) [binary/octet-stream]
Saving to: ‘gpg’

100%[======================================================================================================================>] 1,627       --.-K/s   in 0s      

2020-03-08 18:13:22 (42.4 MB/s) - ‘gpg’ saved [1627/1627]

[root@osestaging1 ~]# gpg --import gpg
gpg: key 621E9F35: public key "Docker Release (CE rpm) <docker@docker.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
[root@osestaging1 ~]# gpg --list-keys --fingerprint
/root/.gnupg/pubring.gpg
------------------------
pub   4096R/621E9F35 2017-02-22
	  Key fingerprint = 060A 61C5 1B55 8A7F 742B  77AA C52F EB6B 621E 9F35
uid                  Docker Release (CE rpm) <docker@docker.com>

[root@osestaging1 ~]# mv gpg /etc/pki/rpm-gpg/docker.gpg
[root@osestaging1 ~]# 
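  1. For completeness, the fingerprint can also be checked straight from the key file before (or instead of) importing it (a sketch; assumes the gpg 2.0.x that ships with CentOS 7):
# print the key(s) contained in the file along with their fingerprints
gpg --with-fingerprint /etc/pki/rpm-gpg/docker.gpg
# the fingerprint printed should match the TOFU'd copy in my personal keyring:
#   060A 61C5 1B55 8A7F 742B  77AA C52F EB6B 621E 9F35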
  1. and now to install docker
[root@osestaging1 ~]# chown root:root /etc/pki/rpm-gpg/docker.gpg
[root@osestaging1 ~]# chmod 0644 /etc/pki/rpm-gpg/docker.gpg
[root@osestaging1 ~]# # and install the repo
[root@osestaging1 ~]# cat << EOF > /etc/yum.repos.d/docker-ce.repo 
> [docker-ce-stable]
> name=Docker CE Stable - $basearch
> baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
> enabled=1
> gpgcheck=1
> gpgkey=file:///etc/pki/rpm-gpg/docker.gpg
> EOF
[root@osestaging1 ~]# 
[root@osestaging1 ~]# # finally, install docker from the repos
[root@osestaging1 ~]# yum install docker-ce
  1. ah, that's a bug in the install guide; the heredoc delimiter was unquoted, so the shell expanded $basearch (to nothing) before writing the file
[root@osestaging1 ~]# cat /etc/yum.repos.d/docker-ce.repo 
[docker-ce-stable]
name=Docker CE Stable - 
baseurl=https://download.docker.com/linux/centos/7//stable
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/docker.gpg
[root@osestaging1 ~]# 
  1. let's do that again with a quoted heredoc so $basearch stays literal. This works.
[root@osestaging1 ~]# 
[root@osestaging1 ~]# cat > /etc/yum.repos.d/docker-ce.repo  <<'EOF'
> [docker-ce-stable]
> name=Docker CE Stable - $basearch
> baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
> enabled=1
> gpgcheck=1
> gpgkey=file:///etc/pki/rpm-gpg/docker.gpg
> EOF
[root@osestaging1 ~]# cat /var/log/docker/prune.log 
cat: /var/log/docker/prune.log: No such file or directory
[root@osestaging1 ~]# cat /etc/yum.repos.d/docker-ce.repo 
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/docker.gpg
[root@osestaging1 ~]# 
  1. and to install
[root@osestaging1 ~]# yum install docker-ce
...
Install  1 Package  (+ 3 Dependent packages)
Upgrade             ( 11 Dependent packages)

Total download size: 98 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/15): container-selinux-2.107-3.el7.noarch.rpm                                                                                         |  39 kB  00:00:00
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/docker-ce-19.03.7-3.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY:00:01 ETA
Public key for docker-ce-19.03.7-3.el7.x86_64.rpm is not installed
(2/15): docker-ce-19.03.7-3.el7.x86_64.rpm                                                                                               |  25 MB  00:00:01
(3/15): containerd.io-1.2.13-3.1.el7.x86_64.rpm                                                                                          |  23 MB  00:00:01
(4/15): libselinux-2.5-14.1.el7.x86_64.rpm                                                                                               | 162 kB  00:00:00
(5/15): libselinux-python-2.5-14.1.el7.x86_64.rpm                                                                                        | 235 kB  00:00:00
(6/15): libselinux-utils-2.5-14.1.el7.x86_64.rpm                                                                                         | 151 kB  00:00:00
(7/15): libsemanage-2.5-14.el7.x86_64.rpm                                                                                                | 151 kB  00:00:00
(8/15): libsemanage-python-2.5-14.el7.x86_64.rpm                                                                                         | 113 kB  00:00:00
(9/15): policycoreutils-2.5-33.el7.x86_64.rpm                                                                                            | 916 kB  00:00:00
(10/15): libsepol-2.5-10.el7.x86_64.rpm                                                                                                  | 297 kB  00:00:00
(11/15): policycoreutils-python-2.5-33.el7.x86_64.rpm                                                                                    | 457 kB  00:00:00
(12/15): setools-libs-3.3.8-4.el7.x86_64.rpm                                                                                             | 620 kB  00:00:00
(13/15): selinux-policy-3.13.1-252.el7_7.6.noarch.rpm                                                                                    | 492 kB  00:00:00
(14/15): selinux-policy-targeted-3.13.1-252.el7_7.6.noarch.rpm                                                                           | 7.0 MB  00:00:00
(15/15): docker-ce-cli-19.03.7-3.el7.x86_64.rpm                                                                                          |  40 MB  00:00:01
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                            33 MB/s |  98 MB  00:00:02
Retrieving key from file:///etc/pki/rpm-gpg/docker.gpg
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 From       : /etc/pki/rpm-gpg/docker.gpg
Is this ok [y/N]: y
...
Installed:
  docker-ce.x86_64 3:19.03.7-3.el7                                                                                                                              

Dependency Installed:
  container-selinux.noarch 2:2.107-3.el7                containerd.io.x86_64 0:1.2.13-3.1.el7                docker-ce-cli.x86_64 1:19.03.7-3.el7               

Dependency Updated:
  libselinux.x86_64 0:2.5-14.1.el7                           libselinux-python.x86_64 0:2.5-14.1.el7           libselinux-utils.x86_64 0:2.5-14.1.el7           
  libsemanage.x86_64 0:2.5-14.el7                            libsemanage-python.x86_64 0:2.5-14.el7            libsepol.x86_64 0:2.5-10.el7                     
  policycoreutils.x86_64 0:2.5-33.el7                        policycoreutils-python.x86_64 0:2.5-33.el7        selinux-policy.noarch 0:3.13.1-252.el7_7.6       
  selinux-policy-targeted.noarch 0:3.13.1-252.el7_7.6        setools-libs.x86_64 0:3.3.8-4.el7                

Complete!
[root@osestaging1 ~]# 
  1. And I enabled & started the docker service
[root@osestaging1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@osestaging1 ~]# systemctl start docker.service
[root@osestaging1 ~]#
  1. Now for Discourse; first we get the repo
[root@osestaging1 ~]# sudo -s
[root@osestaging1 ~]# git clone https://github.com/discourse/discourse_docker.git /var/discourse
Cloning into '/var/discourse'...
remote: Enumerating objects: 45, done.
remote: Counting objects: 100% (45/45), done.
remote: Compressing objects: 100% (37/37), done.
remote: Total 4663 (delta 19), reused 22 (delta 8), pack-reused 4618
Receiving objects: 100% (4663/4663), 1021.70 KiB | 0 bytes/s, done.
Resolving deltas: 100% (2984/2984), done.
[root@osestaging1 ~]# cd /var/discourse
[root@osestaging1 discourse]# 
  1. interesting, the next command from the install guide failed because there's no app.yml file yet. Where does that come from?
[root@osestaging1 discourse]# mv containers/app.yml containers/discorse_ose.yml
mv: cannot stat ‘containers/app.yml’: No such file or directory
[root@osestaging1 discourse]# ls
bin  cids  containers  discourse-doctor  discourse-setup  image  launcher  LICENSE  README.md  samples  scripts  shared  templates  Vagrantfile
[root@osestaging1 discourse]# ls containers/
[root@osestaging1 discourse]# 
  1. Per the install guide, the 'app.yml' file gets generated by running the buggy ./discourse-setup script, but that script has no support for a default postfix install without auth (one that only listens locally on 127.0.0.1), so we can't really use it as-is. https://github.com/discourse/discourse/blob/master/docs/INSTALL-cloud.md
  2. I tried a bootstrap and waited >10 minutes, but it didn't appear to do a damn thing
[root@osestaging1 discourse]# time ./launcher bootstrap discourse_ose
^C

real    13m18.545s
user    0m0.011s
sys     0m0.020s
[root@osestaging1 discourse]# 
  1. I tried to proceed with the damn 'discourse-setup' script anyway, but it soon bailed out while doing a pointless reachability check on the hostname I gave it. Yeah, I know, it's a staging server behind a VPN; you can't access it from the outside. Just build my file already, you buggy piece of shit. (A manual fallback is sketched after the output below.)
[root@osestaging1 discourse]# ./discourse-setup
which: no docker.io in (/sbin:/bin:/usr/sbin:/usr/bin)
which: no docker.io in (/sbin:/bin:/usr/sbin:/usr/bin)
./discourse-setup: line 275: netstat: command not found
./discourse-setup: line 275: netstat: command not found
Ports 80 and 443 are free for use
‘samples/standalone.yml’ -> ‘containers/app.yml’
Found 1GB of memory and 1 physical CPU cores
setting db_shared_buffers = 128MB
setting UNICORN_WORKERS = 2
containers/app.yml memory parameters updated.

Hostname for your Discourse? [discourse.example.com]: discourse.opensourceecology.org

Checking your domain name . . .
WARNING:: This server does not appear to be accessible at discourse.opensourceecology.org:443.

A connection to http://discourse.opensourceecology.org (port 80) also fails.

This suggests that discourse.opensourceecology.org resolves to the wrong IP address
or that traffic is not being routed to your server.

Google: "open ports YOUR CLOUD SERVICE" for information for resolving this problem.

If you want to proceed anyway, you will need to
edit the containers/app.yml file manually.
[root@osestaging1 discourse]# 
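  1. For future reference, if discourse-setup ever dies on that reachability check before writing anything, the config can be seeded by hand the same way the script does it (a sketch, per the 'samples/standalone.yml' -> 'containers/app.yml' line in the output above):
cp samples/standalone.yml containers/app.yml
# then manually set DISCOURSE_HOSTNAME, db_shared_buffers, and UNICORN_WORKERS in containers/app.yml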
  1. anyway, it did create the app.yml file before dying
[root@osestaging1 discourse]# ls containers/
app.yml
[root@osestaging1 discourse]# 
  1. I updated our install guide on the wiki. Now we continue by renaming it
[root@osestaging1 discourse]# mv containers/app.yml containers/discorse_ose.yml
[root@osestaging1 discourse]# 
  1. I updated the documentation here to make this next step more robust with sed (the expected result is sketched after the commands below)
[root@osestaging1 discourse]# sed --in-place=.`date "+%Y%m%d_%H%M%S"` 's%^\([^#]*\)\(DISCOURSE_SMTP.*\)$%\1#\2%' /var/discourse/containers/discourse_ose.yml
[root@osestaging1 discourse]# grep 'DISCOURSE_SMTP_ADDRESS: 172.17.0.1' /var/discourse/containers/discourse_ose.yml || sed --in-place=.`date "+%Y%m%d_%H%M%S"` 's%^env:$%env:\n  DISCOURSE_SMTP_ADDRESS: 172.17.0.1 # this is the IP Address of the host server on the docker0 interface\n  DISCOURSE_SMTP_PORT: 25\n  DISCOURSE_SMTP_AUTHENTICATION: none\n  DISCOURSE_SMTP_OPENSSL_VERIFY_MODE: none\n  DISCOURSE_SMTP_ENABLE_START_TLS: false\n%' /var/discourse/containers/discourse_ose.yml
[root@osestaging1 discourse]# 
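  1. If the two sed commands above did their job, the top of the env block in /var/discourse/containers/discourse_ose.yml should now begin roughly like this (and the stock DISCOURSE_SMTP_* lines further down should be commented out):
env:
  DISCOURSE_SMTP_ADDRESS: 172.17.0.1 # this is the IP Address of the host server on the docker0 interface
  DISCOURSE_SMTP_PORT: 25
  DISCOURSE_SMTP_AUTHENTICATION: none
  DISCOURSE_SMTP_OPENSSL_VERIFY_MODE: none
  DISCOURSE_SMTP_ENABLE_START_TLS: false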
  1. then I followed my guide to set up postfix to work with the docker container
[root@osestaging1 discourse]# grep -E 'mynetworks|interfaces' /etc/postfix/main.cf
# The inet_interfaces parameter specifies the network interface
# the software claims all active interfaces on the machine. The
# See also the proxy_interfaces parameter, for network addresses that
#inet_interfaces = all
#inet_interfaces = $myhostname
#inet_interfaces = $myhostname, localhost
inet_interfaces = localhost
# The proxy_interfaces parameter specifies the network interface
# the address list specified with the inet_interfaces parameter.
#proxy_interfaces =
#proxy_interfaces = 1.2.3.4
# receives mail on (see the inet_interfaces parameter).
# to $mydestination, $inet_interfaces or $proxy_interfaces.
# ${proxy,inet}_interfaces, while $local_recipient_maps is non-empty
# The mynetworks parameter specifies the list of "trusted" SMTP
# By default (mynetworks_style = subnet), Postfix "trusts" SMTP
# On Linux, this does works correctly only with interfaces specified
# Specify "mynetworks_style = class" when Postfix should "trust" SMTP
# mynetworks list by hand, as described below.
# Specify "mynetworks_style = host" when Postfix should "trust"
#mynetworks_style = class
#mynetworks_style = subnet
mynetworks_style = host
# Alternatively, you can specify the mynetworks list by hand, in
# which case Postfix ignores the mynetworks_style setting.
#mynetworks = 168.100.189.0/28, 127.0.0.0/8
#mynetworks = $config_directory/mynetworks
#mynetworks = hash:/etc/postfix/network_table
# - from "trusted" clients (IP address matches $mynetworks) to any destination,
# - destinations that match $inet_interfaces or $proxy_interfaces,
# unknown@[$inet_interfaces] or unknown@[$proxy_interfaces] is returned
[root@osestaging1 discourse]#
[root@osestaging1 discourse]# ls -lah /etc/postfix/main.cf*
-rw-r--r--. 1 root root 27K Mar 19  2019 /etc/postfix/main.cf
-rw-r--r--. 1 root root 27K Jul 10  2017 /etc/postfix/main.cf.20170710.orig
-rw-r--r--. 1 root root 27K Mar 17  2019 /etc/postfix/main.cf.20190317
[root@osestaging1 discourse]# grep 'mynetworks = 127.0.0.0/8, 172.17.0.0/16' /etc/postfix/main.cf || sed --in-place=.`date "+%Y%m%d_%H%M%S"` 's%^mynetworks_style = host$%#mynetworks_style = host\nmynetworks = 127.0.0.0/8, 172.17.0.0/16%' /etc/postfix/main.cf
[root@osestaging1 discourse]# ls -lah /etc/postfix/main.cf*
-rw-r--r--. 1 root root 27K Mar  8 19:23 /etc/postfix/main.cf
-rw-r--r--. 1 root root 27K Jul 10  2017 /etc/postfix/main.cf.20170710.orig
-rw-r--r--. 1 root root 27K Mar 17  2019 /etc/postfix/main.cf.20190317
-rw-r--r--. 1 root root 27K Mar 19  2019 /etc/postfix/main.cf.20200308_192321
[root@osestaging1 discourse]# grep -E 'mynetworks|interfaces' /etc/postfix/main.cf
# The inet_interfaces parameter specifies the network interface
# the software claims all active interfaces on the machine. The
# See also the proxy_interfaces parameter, for network addresses that
#inet_interfaces = all
#inet_interfaces = $myhostname
#inet_interfaces = $myhostname, localhost
inet_interfaces = localhost   
# The proxy_interfaces parameter specifies the network interface
# the address list specified with the inet_interfaces parameter.
#proxy_interfaces =
#proxy_interfaces = 1.2.3.4   
# receives mail on (see the inet_interfaces parameter).
# to $mydestination, $inet_interfaces or $proxy_interfaces.
# ${proxy,inet}_interfaces, while $local_recipient_maps is non-empty
# The mynetworks parameter specifies the list of "trusted" SMTP
# By default (mynetworks_style = subnet), Postfix "trusts" SMTP
# On Linux, this does works correctly only with interfaces specified
# Specify "mynetworks_style = class" when Postfix should "trust" SMTP
# mynetworks list by hand, as described below.
# Specify "mynetworks_style = host" when Postfix should "trust"
#mynetworks_style = class
#mynetworks_style = subnet
#mynetworks_style = host
mynetworks = 127.0.0.0/8, 172.17.0.0/16
# Alternatively, you can specify the mynetworks list by hand, in
# which case Postfix ignores the mynetworks_style setting.
#mynetworks = 168.100.189.0/28, 127.0.0.0/8
#mynetworks = $config_directory/mynetworks
#mynetworks = hash:/etc/postfix/network_table
# - from "trusted" clients (IP address matches $mynetworks) to any destination,
# - destinations that match $inet_interfaces or $proxy_interfaces,
# unknown@[$inet_interfaces] or unknown@[$proxy_interfaces] is returned
[root@osestaging1 discourse]# 
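  1. Note that postfix only picks up main.cf changes on a reload/restart, so the new mynetworks value still needs to be applied; a quick sketch to apply and verify it (assuming the usual systemd unit name):
systemctl reload postfix
postconf mynetworks   # should now print: mynetworks = 127.0.0.0/8, 172.17.0.0/16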
  1. I continued to improve the install documentation for setting up the nginx socket step (a sketch of the outer-proxy side follows the output below)
[root@osestaging1 discourse]# grep 'templates/web.socketed.template.yml' /var/discourse/containers/discourse_ose.yml || sed --in-place=.`date "+%Y%m%d_%H%M%S"` 's%^templates:$%templates:\n  - "templates/web.socketed.template.yml"%' /var/discourse/containers/discourse_ose.yml
[root@osestaging1 discourse]# grep -C2 templates /var/discourse/containers/discourse_ose.yml
## visit http://www.yamllint.com/ to validate this file as needed

templates:
  - "templates/web.socketed.template.yml"
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
## Uncomment these two lines if you wish to add Lets Encrypt (https)
  #- "templates/web.ssl.template.yml"
  #- "templates/web.letsencrypt.ssl.template.yml"

## which TCP/IP ports should this container expose?
[root@osestaging1 discourse]# perl -i".`date "+%Y%m%d_%H%M%S"`" -p0e 's%expose:\n  -([^\n]*)\n  -([^\n]*)%#expose:\n#  -\1\n#  -\2%gs' /var/discourse/containers/discourse_ose.yml
[root@osestaging1 discourse]# grep -C4 expose /var/discourse/containers/discourse_ose.yml
## Uncomment these two lines if you wish to add Lets Encrypt (https)
  #- "templates/web.ssl.template.yml"
  #- "templates/web.letsencrypt.ssl.template.yml"

## which TCP/IP ports should this container expose?
## If you want Discourse to share a port with another webserver like Apache or nginx,
## see https://meta.discourse.org/t/17247 for details
#expose:
#  - "80:80"   # http
#  - "443:443" # https

params:
[root@osestaging1 discourse]# 
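  1. For context, the web.socketed template makes the container's nginx listen on a unix socket instead of publishing ports 80/443, and the host's outer nginx then proxies to that socket. A minimal sketch of the outer-proxy side (the socket path is the stock location per https://meta.discourse.org/t/17247; the surrounding vhost config is omitted):
location / {
    proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
    proxy_set_header Host $http_host;
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}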
  1. continuing with my guide, I set up the container's nginx build to include the ModSecurity module (a verification sketch follows the transcript below)
[root@osestaging1 discourse]# cd /var/discourse/image/base
[root@osestaging1 base]# cp install-nginx install-nginx.`date "+%Y%m%d_%H%M%S"`.orig
[root@osestaging1 base]# # add a block to checkout the the modsecurity nginx module just before downloading the nginx source
[root@osestaging1 base]# grep 'ModSecurity' install-nginx || sed -i 's%\(curl.*nginx\.org/download.*\)%# mod_security --maltfield\napt-get install -y libmodsecurity-dev modsecurity-crs\ncd /tmp\ngit clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git\n\n\1%' install-nginx
[root@osestaging1 base]# 
[root@osestaging1 base]# # update the configure line to include the ModSecurity module checked-out above
[root@osestaging1 base]# sed -i '/ModSecurity/! s%^[^#]*./configure \(.*nginx.*\)%#./configure \1\n./configure \1 --add-module=/tmp/ModSecurity-nginx%' install-nginx
[root@osestaging1 base]# 
[root@osestaging1 base]# # add a line to cleanup section
[root@osestaging1 base]# grep 'rm -fr /tmp/ModSecurity-nginx' install-nginx || sed -i 's%\(rm -fr.*/tmp/nginx.*\)%rm -fr /tmp/ModSecurity-nginx\n\1%' install-nginx
[root@osestaging1 base]# 
[root@osestaging1 base]# cat install-nginx
#!/bin/bash
set -e
VERSION=1.17.4
cd /tmp

apt install -y autoconf  


git clone https://github.com/bagder/libbrotli
cd libbrotli
./autogen.sh
./configure
make install

cd /tmp


# this is the reason we are compiling by hand...
git clone https://github.com/google/ngx_brotli.git

# mod_security --maltfield
apt-get install -y libmodsecurity-dev modsecurity-crs
cd /tmp
git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git

curl -O https://nginx.org/download/nginx-$VERSION.tar.gz
tar zxf nginx-$VERSION.tar.gz
cd nginx-$VERSION

# so we get nginx user and so on
apt install -y nginx libpcre3 libpcre3-dev zlib1g zlib1g-dev
# we don't want to accidentally upgrade nginx and undo our work
apt-mark hold nginx

# now ngx_brotli has brotli as a submodule
cd /tmp/ngx_brotli && git submodule update --init && cd /tmp/nginx-$VERSION

# ignoring depracations with -Wno-deprecated-declarations while we wait for this https://github.com/google/ngx_brotli/issues/39#issuecomment-254093378
#./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli
./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_v2_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/tmp/ngx_brotli --add-module=/tmp/ModSecurity-nginx

make install

mv /usr/share/nginx/sbin/nginx /usr/sbin

cd /
rm -fr /tmp/ModSecurity-nginx
rm -fr /tmp/nginx
rm -fr /tmp/libbrotli
rm -fr /tmp/ngx_brotli
rm -fr /etc/nginx/modules-enabled/*
[root@osestaging1 base]#
[root@osestaging1 base]# cd /var/discourse
[root@osestaging1 discourse]# 
[root@osestaging1 discourse]# # replace the line "image="discourse/base:<version>" with 'image="discourse_ose"'
[root@osestaging1 discourse]# grep 'discourse_ose' launcher || sed --in-place=.`date "+%Y%m%d_%H%M%S"` '/base_image/! s%^\(\s*\)image=\(.*\)$%#\1image=\2\n\1image="discourse_ose"%' launcher
[root@osestaging1 discourse]# 
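  1. Once the image actually builds, one way to confirm that the ModSecurity module got compiled into nginx would be to check the configure arguments baked into the binary (a sketch; assumes the image runs ad-hoc commands directly, i.e. no entrypoint getting in the way):
docker run --rm discourse_ose nginx -V 2>&1 | grep -o 'ModSecurity-nginx'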
  1. I added to the documentation a step to verify the command above
[root@osestaging1 discourse]# grep 'image=' launcher
user_run_image=""
	user_run_image="$2"
#image="discourse/base:2.0.20200220-2221"
image="discourse_ose"
  run_image=`cat $config_file | $docker_path run $user_args --rm -i -a stdin -a stdout $image ruby -e \
	run_image=$user_run_image
	run_image="$local_discourse/$config"
  base_image=`cat $config_file | $docker_path run $user_args --rm -i -a stdin -a stdout $image ruby -e \
	image=$base_image
[root@osestaging1 discourse]# 
  1. And I updated the documentation to prefix the next command with 'time nice'. Unfortunately, it just hangs, doing nothing.
[root@osestaging1 discourse]# time nice docker build --tag 'discourse_ose' /var/discourse/image/base/

^C

real    8m3.752s
user    0m0.139s
sys     0m0.084s
[root@osestaging1 discourse]# 
  1. Even `docker info` is hanging indefinitely. I'll have to look into this tomorrow (some first troubleshooting steps are sketched below).
[root@osestaging1 discourse]# docker info









^C
[root@osestaging1 discourse]# 
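  1. When both the build and 'docker info' hang like this, the first things to check are whether dockerd itself is wedged and what it's logging (a sketch):
systemctl status docker
journalctl -u docker --since "1 hour ago" | tail -n 50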

Fri Mar 06, 2020

  1. our application for ose.discourse.group was approved. I followed the wizard for the basic setup
  2. I sent an email asking what happens to domain redirection once we outgrow this plan (which currently only supports 5GB of storage, 5 staff accounts, etc)
  3. They responded stating that there would be no redirection option, and that our best bet would be to host it ourselves or pay them $50/month (which is their 50% discount!) to host it for us. I never want to put us in such a dependent position, so self-hosting it is.

Thr Mar 05, 2020

  1. I submitted a request for a free not-self-hosted version of Discourse https://free.discourse.group/
Q: Source Code or Crowdfunding Profile Link
A: Most of our open source releases are hardware (we build open-source industrial machines) and the relevant freecad files and guides are CC-BY-SA and self-hosted on our wiki (wiki.opensourceecology.org). All of our code is also GPL'd, but that's just a small subset of our open-source work
...
Q: Why do you need Discourse?
A: We've self-hosted our forums on vanilla before, but we've had issues with spam and moderation. We hope that Discourse will be a better platform that'll reduce our administration time required to create a place that facilitates healthy discussions among our community.

We're doing production runs on some of our machines and gearing up for distribution & sales. A component of that is support, and we wish to use Discourse to facilitate supporting our customer's purchases.

But we will also likely use it for development discussions and as a replacement for wordpress/disqus in our wordpress and wiki sites as well.
  1. I plan to follow up and inquire about "growing out" of their free plan. Specifically, if we ever need more disk space or bandwidth, I want to make sure that either [a] we can use a custom subdomain (i.e. 'forum.opensourceecology.org') that we control or [b] they can commit to doing some redirects to *our* domain from *their* domain after we leave their servers.
  2. I sent a follow-up with more links, but I should have also included this link to our CEB controller's code https://github.com/OpenSourceEcology/OSE_CEB_Press_v19.01/blob/master/CEB_Control_Code_v1901.ino

Thr Feb 13, 2020

  1. Ugh, I just got an (encrypted) email from OSSEC showing a diff of our cert.pem files that changed.
  2. I should have already suppressed this with these lines present in /var/ossec/etc/ossec.conf
<nodiff>/etc/letsencrypt/live/</nodiff>
<nodiff>/root/backups/*.key</nodiff>
  1. Not sure why it isn't working. Here are the files that were diff'd
    1. /etc/letsencrypt/live/openbuildinginstitute.org/cert.pem
    2. /etc/letsencrypt/live/opensourceecology.org/cert.pem
  2. I went ahead and added this to ossec.conf & gave ossec a restart *shrug*
<nodiff>cert.pem</nodiff>
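  1. If that still doesn't suppress the alerts, a more explicit variant would be to list the full paths of the two files from the alerts inside the <syscheck> block (a sketch; whether these nodiff entries match on directory prefixes or require full file paths is exactly the open question here):
<nodiff>/etc/letsencrypt/live/opensourceecology.org/cert.pem</nodiff>
<nodiff>/etc/letsencrypt/live/openbuildinginstitute.org/cert.pem</nodiff>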
  1. ...
  2. I also had an issue with phplist forbidden errors caused by this false positive. I fixed it by whitelisting rule 960020 in /etc/httpd/conf.d/00-phplist.opensourceecology.org.conf & restarting apache (a sketch of the directive follows the log excerpt below)
Message: Access denied with code 403 (phase 2). String match "HTTP/1.1" at REQUEST_PROTOCOL. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "399"] [id "960020"] [rev "2"] [msg "Pragma Header requires Cache-Control Header for HTTP/1.1 requests."] [severity "NOTICE"] [ver "OWASP_CRS/2.2.9"] [maturity "6"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/INVALID_HREQ"]             
Action: Intercepted (phase 2)
Stopwatch: 1581577253790393 8718 (- - -)
Stopwatch2: 1581577253790393 8718; combined=353, p1=324, p2=15, p3=0, p4=0, p5=14, sr=65, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache
Engine-Mode: "ENABLED"
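  1. For reference, the whitelist in 00-phplist.opensourceecology.org.conf is probably just a one-line mod_security directive along these lines (a sketch; the exact <IfModule>/vhost placement is an assumption):
<IfModule security2_module>
    SecRuleRemoveById 960020
</IfModule>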

Tue Feb 04, 2020

  1. sent Marcin an email saying that I will have limited time to contribute to OSE this year, and stated that I could commit to [a] finishing discourse, [b] training a replacement for me, and [c] serving only as an advisor for a few hours per month going forward after that

Sun Jan 26, 2020

  1. documentation, emails

Sat Jan 25, 2020

  1. Today I met with Satprem Maïni, director of the Auroville Earth Institute at their office in the CSR complex in Auroville, India. Marcin joined us via video call.
  2. A few notes about our meeting:
    1. Satprem was not familiar with OSE (but Richard--whom I met earlier, though he was not present--was familiar with our wiki)
    2. We invited Satprem to join us for Summer_of_Extreme_Design-Build_2020
    3. Satprem showed me a document titled "Services Offered - 2020" which states that their consultancy fees are 450 EUR per person per day for less than a week and 350 EUR per person per day for more than a week. He said that this is negotiable for more than one week. Marcin asked for two weeks.
    4. We would also be responsible for paying for all expenses, including "flight in economy class, visa expenses, local transportation, boarding and lodging, laundry, food, water, etc"
    5. So I guess (if Satprem would be OK with staying at FeF) we'd be looking at roughly $7k-$15k for this

...

  1. I asked Satprem about capillary action. He said if water gets into your CSEBs via capillary action, you have a design flaw. He stated that they've also built in very cold climates where it freezes
  2. I asked about the ~22m diameter CSEB dome they built. He said that they never waterproofed the roof, and it does leak. He said he hasn't visited it in a while, but last he checked they had put a coconut-leaf roof over the dome--which is the traditional natural roof used here in Tamil Nadu
  3. Satprem also said that 2 weeks wouldn't be enough time to design & build a machine. We explained our swarm building approach.
  4. Marcin asked Satprem about his experience designing machines. He said that he doesn't have that skill, and that Aureka mostly designs their machines. I asked about Richard, and he said that it was still mostly an expertise possessed by Aureka. Marcin asked if Satprem would be interested in learning this, and Satprem said he has too much else to work on to focus on learning how to design machines.
  5. ...
  6. I asked Satprem about their open-source licensing of their machines. He said that they keep the designs for their AuroPress private within Auroville only. The Auram press is made by Aureka.
  7. I asked Satprem about the books they've published, and he said that we could get the PDFs if we purchased them. He also gave us permission to publish the PDF on our wiki, including a spreadsheet that comes on a CD with one of the books.
  8. I told Satprem I would be happy to buy all of AVEI's books, but he suggested that I only buy three of them, so I bought these three:
    1. File:AVEI economic feasibility to set up a cseb unit.pdf
      1. this book includes a CD titled "Economics of CSEB" with a spreadsheet titled File:Intro 11 - CSEB Economics.xls
    2. File:AVEI building with arches vaults and domes.pdf
    3. File:AVEI production and use of compressed stabalized earth blocks.pdf

Mon Jan 20, 2020

  1. Emailed Satprem (AVEI director) asking his availability to meet with Marcin
Hi Satprem,

Would you be available to meet for an online video chat later this week
with our Director, Marcin Jakubowski?

My name is Michael. I am working with Open Source Ecology. We are
currently designing open-source blueprints for civilization. We also
work with Open Building Institute.

 * https://www.opensourceecology.org
 * https://www.openbuildinginstitute.org

One of the machines that we develop is an Earth Brick Press for making CEBs.

 * https://wiki.opensourceecology.org/wiki/CEB_Press

We'd like to invite you to visit us in the US in August. Would you be
available at any of the following times to discuss this further on an
internet video call?

 1. Thursday morning at 07:30 or 08:30 IST
 2. Friday morning after 10:30 IST
 3. Saturday morning at 07:30 or 08:30 IST

The meeting will be with Marcin Jakubowski (Director, Open Source
Ecology)--who will be joining remotely from the US--and myself. I'm
currently in Auroville, and I'd like to physically be present at the
AVEI office to meet you in person and to assist with any potential
technical difficulties.

Please let me know if any of these times work for you.


Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org

Sun Jan 19, 2020

  1. downloaded content for meeting with Satprem to phone

Sat Jan 18, 2020

  1. review of OSE CEB Press videos
  2. created table comparing OSE CEB Press to AVEI's

Fri Jan 17, 2020

  1. review of OSE CEB Press videos

Thr Jan 16, 2020

  1. email

Wed Jan 15, 2020

  1. uploading research & documentation on Auroville Earth Institute and their machines
  2. emails

Tue Jan 14, 2020

  1. OSE & AVEI brick press research

Mon Jan 13, 2020

  1. Researching OSE & AVEI brick presses
  2. Preparing notes for meeting with AVEI director https://wiki.opensourceecology.org/wiki/Visiting_Auroville_2020#Auroville_Earth_Institute

Sun Jan 12, 2020

  1. Documentation of the Auroville Earth Institute office & their exhibition room.

Sat Jan 11, 2020

  1. Visited the Auroville Earth Institute office & their exhibition room.

Thr Jan 02, 2020

  1. Marcin asked why www.opensourceecology.org takes 5 seconds to load and what we can do to speed it up
  2. I did some quick checking. There's a pretty absurd number of assets required to be downloaded to render our site. I did a quick search for "async" in the source code and couldn't find a single one. To fix this, here's my checklist TODO (but I doubt it's higher priority than Discourse; I asked for clarification from Marcin):
    1. Do a page load from a fresh session and watch the varnish logs for the client's IP. Verify that 100% of the assets requested on the front page are returned from the varnish cache; this essentially rules out the server being slow due to code, db calls, etc, and should be done first. We have tons of RAM, so if any assets are not cached, we should modify the varnish config to make sure 100% of all front-page assets get cached by varnish. (A quick hit/miss sketch is below.)
    2. We should enumerate the assets from above and try disabling them in the browser one-by-one to see the effect. If any can be safely eliminated on all our sites, we should cut them out. This is not a trivial process, and it may be deferred until sufficient time to focus on this task is available.
    3. For remaining assets that are necessary, we'll want to minify and async them. We should research a very simple/fast/lightweight wordpress plugin that will do this for us in such a way that it itself creates a cache of the minified files so it's not wasting time doing the minification every time we have to make a call to the backend.
    4. We should research our CDN options. This should be taken seriously: once we start using a CDN, it could end up injecting malicious content into our site that could be used to hack our users. Therefore, the CDN must be trustworthy, so that they won't themselves be malicious or negligent enough to let themselves be unknowingly hacked by a malicious 3rd party.
    5. I sent an email to Fastly about their free CDN service for non-profits; more research is needed though.
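    6. For item 1 above, a quick way to eyeball hit/miss per front-page asset would be something like the following (a sketch; assumes varnish 4+ tooling and that the client IP ends up in X-Forwarded-For behind the nginx TLS terminator):
# print cache hit/miss, method, and URL for requests from a given client IP
varnishncsa -q 'ReqHeader:X-Forwarded-For ~ "<your-ip>"' -F '%{Varnish:hitmiss}x %m %U%q'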
  3. Here's my email response
Without any investigation at all, I would ignorantly assume that this
may be caused by loading some large static asset dependencies, such as
javascript, css, images, etc that are blocking the content from arriving
to the browser.

If I'm correct, the solution is usually to:

 1. cut out unnecessary dependencies,
 2. "minify" any required dependencies (removes comments & whitespace
	and compresses them to decrease their size),
 3. make those resources "asynchronous/non-blocking" if possible (so
	they load *after* the page loads),
 4. cache them on the server side (varnish; may already be done, but
	validation would be necessary),
 5. tell browsers to cache them on the client side, and
 6. put any static assets we can on a super-fast CDN.

To determine if I'm correct, we can run the site through some online
load speed tools. They will usually load the site in a virtual browser,
enumerate the assets that were required, and show you the time required
to download the assets and render the page. I just did this with Google
Page Insights. Here's the results:

 *
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.opensourceecology.org

The above page suggests some of the resources blocking our page load
include twitter, bootstrap (probably required by the wordpress theme
we're using), and jquery. Our non-CDN images look pretty small & sane
as-is. And we may be able to remove some CSS files. Maybe.

The lowest hanging fruit is probably:

 1. Test enabling gzip compression encoding in apache (and maybe needed
	in vanish/nginx too)

And the next-lowest fruit is:

 1. Research CDN options. We may be eligible for free service as a
	non-profit.

I just emailed fastly, which a quick google suggested they provide free
CDN for non-profits. There's probably other options. And we should do
our research on this. We don't want a "free" CDN service that (either
intentionally or because they were hacked) does some sketchy injection
of malicious content into our site's resources. That's essentially how
thousands of e-commerce sites were hacked and had hundreds of thousands
of credit cards stolen in the Magecart web skimming scandal in 2018..

 * https://en.wikipedia.org/wiki/Web_skimming

The more direct but more time consuming solution is sifting through all
the shit that our theme is using and cutting out the unnecessary fat and
minifying the rest.

Please let me know the priority of this task compared to the Discourse POC.


Cheers,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7  70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org