Maltfield Log/2018 Q2

My work log from the year 2018 Quarter 2. I intentionally made this verbose to make future admins' work easier when troubleshooting. The more keywords, error messages, etc. that are listed in this log, the more helpful it will be for the future OSE Sysadmin.

See Also

  1. Maltfield_Log
  2. User:Maltfield
  3. Special:Contributions/Maltfield

Sat Jun 30, 2018

  1. dug through my old emails to find the extension that Marcin & Michel wanted to add to our wiki; it's a 3d viewer that allows visitors to manipulate our stl files in real-time on the site using WebGL https://www.mediawiki.org/wiki/Extension:3DAlloy
    1. before, we couldn't add it because our version of mediawiki was too old. It's a lower priority for me right now (behind documentation, a scalable video conferencing solution, backups to s3, & deprecation of hetzner1)
    2. I updated our mediawiki article, adding this to the "proposed extensions" list https://wiki.opensourceecology.org/wiki/Mediawiki#Proposed
  2. found a more robust way to resize images when the images are a mixed batch of orientations (landscape vs portrait) & different sizes https://stackoverflow.com/questions/6384729/only-shrink-larger-images-using-imagemagick-to-a-ratio
# most robust; handles portrait & landscape images of various dimensions
# 786433 = 1024*768+1
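# '@' makes the number an upper bound on total pixel area; '>' only shrinks images larger than that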
find . -maxdepth 1 -iname '*.jpg' -exec convert {} -resize '786433@>' {} \;
    1. I updated our documentation on this https://wiki.opensourceecology.org/wiki/Batch_Resize
  3. uploaded my two photos from the D3D Extruder build (Modified Prusa i3 MK2) https://wiki.opensourceecology.org/wiki/D3D_Extruder#2018-06_Notes

Tue Jun 26, 2018

  1. Marcin pointed out that many of the extremely large images on the wiki display an error when mediawiki attempts to generate a thumbnail for them = "Error creating thumbnail: File with dimensions greater than 12.5 MP" https://wiki.opensourceecology.org/wiki/High_Resolution_GVCS_Media
    1. for example https://wiki.opensourceecology.org/wiki/File:1day.jpg
    2. according to mediawiki, this issue can be resolved by setting the $wgMaxImageArea variable to a higher value https://www.mediawiki.org/wiki/Manual:Errors_and_symptoms#Error_creating_thumbnail:_File_with_dimensions_greater_than_12.5_MP
    3. the $wgMaxImageArea value is capped to prevent mediawiki from using too much RAM. Because we have way, way more memory than we need on our machine, there appears to be no risk in increasing this value. Moreover, it should only need to be done once, as mediawiki generates these thumbnails & saves them as physical files in the 'images/thumb/' directory structure, for example 300px-Bhp1.jpg
    4. there was a note in the above wiki article mentioning that increasing the $wgMaxImageArea value may necessitate increasing the $wgMaxShellMemory value as well. But since we don't let php shell out (and therefore use gd instead of imagemagick), this is not relevant to us.
      1. ^ that said, I was digging in the LocalSettings.php file and found that this $wgMaxShellMemory value had been set (by a previous admin), mentioning this blog article in the preceding comment https://blog.breadncup.com/2009/12/01/thumbnail-image-memory-allocation-error-in-mediawiki/
      2. ^ that blog article mentions that one of the benefits of imagemagick is that it stores the image file separately. If that means that gd does *not* store the image file separately, then that's not very good.
      3. after spending some time poking around the thumbnail-related wiki articles, I found nothing suggesting that gd does not save its generated thumbnails to disk https://www.mediawiki.org/wiki/Manual:Configuration_settings#Thumbnail_settings
      4. the best test of this would be to see if the example file above, which is erroring out, will show a thumbnail after we fix the error. If we can then browse to a physical file on the disk for that thumbnail, then we know that gd generates thumbnails once (or at least keeps them as a file cache until expiry) https://wiki.opensourceecology.org/wiki/File:1day.jpg
    5. I set the $wgMaxImageArea variable to '1.25e7' = 12.5 million pixels (roughly 3536×3536) https://www.mediawiki.org/wiki/Manual:$wgMaxImageArea
      1. note that this is actually supposed to be the default value, but it fixed the error *shrug*
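      2. in concrete terms, the change amounts to one line in LocalSettings.php (a sketch; the exact placement in the file may differ)
# in /var/www/html/wiki.opensourceecology.org/htdocs/LocalSettings.php
$wgMaxImageArea = 1.25e7;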
    6. after the change, the File:1day.jpg link refreshed to show thumbnails in place of the errors. One of the thumbnail image files was 120px-1day.jpg
    7. I confirmed that this was an actual file on the disk, and that it was just one among 12 thumbnails
[root@hetzner2 wiki.opensourceecology.org]# ls -lah htdocs/images/thumb/f/fc/1day.jpg/
total 1.1M
drwxrwx---  2 apache apache 4.0K May 28  2015 .
drwxrwx--- 32 apache apache 4.0K Feb  4 14:56 ..
-rw-rw----  1 apache apache 149K May 28  2015 1000px-1day.jpg
-rw-rw----  1 apache apache 200K May 27  2015 1200px-1day.jpg
-rw-rw----  1 apache apache 4.0K Jan  9  2013 120px-1day.jpg
-rw-rw----  1 apache apache 324K May 27  2015 1599px-1day.jpg
-rw-rw----  1 apache apache 7.9K Jan  9  2013 180px-1day.jpg
-rw-rw----  1 apache apache  19K May 27  2015 300px-1day.jpg
-rw-rw----  1 apache apache  21K May 27  2015 320px-1day.jpg
-rw-rw----  1 apache apache  38K May 27  2015 450px-1day.jpg
-rw-rw----  1 apache apache  45K Dec 27  2013 500px-1day.jpg
-rw-rw----  1 apache apache  61K May 27  2015 600px-1day.jpg
-rw-rw----  1 apache apache  92K May 28  2015 750px-1day.jpg
-rw-rw----  1 apache apache 100K Jan  9  2013 800px-1day.jpg
[root@hetzner2 wiki.opensourceecology.org]# 
## indeed, subsequent refreshes of this page loaded fast (the first was very, very slow)
  1. ...
  1. uploaded photos of my jellybox build https://wiki.opensourceecology.org/wiki/Jellybox_1.3_Build_2018-05#Notes_on_my_Experience
    1. finished this & sent notice to Marcin
    2. next item on the documentation front is the d3d extruder photos. Then all the other misc photos.

Mon Jun 25, 2018

  1. so google finished processing our sitemap, and sent us an email on Saturday to notify us that it was done & had some errors
    1. 15 pages were "Submitted URL has crawl issue" (status = error)
    2. 2 pages were "Submitted URL blocked by robots.txt" (status = error)
    3. 2 pages were "Indexed, though blocked by robots.txt" (status = warning)
    4. 30299 pages were "Discovered - currently not indexed" (status = excluded)
    5. 1151 pages were "Crawled - currently not indexed" (status = excluded)
    6. 48 pages were "Excluded by ‘noindex’ tag" (status = excluded)
    7. 48 pages were "Blocked by robots.txt" (status = excluded)
    8. 32 pages were "Soft 404" (status = excluded)
    9. 11 pages were "Crawl anomaly" (status = excluded)
    10. 7 pages were "Submitted URL not selected as canonical" (status = excluded)
    11. 887 pages were "Submitted and indexed" (status = valid)
    12. 11 pages were "Indexed, not submitted in sitemap" (status = valid)
  2. so out of the 32,360 pages that we submitted to Google from the mediawiki-generated sitemap, it looks like only 887 got indexed
  3. there were only 2 error types covering 17 pages
    1. 15 pages were "Submitted URL has crawl issue" (status = error):
      1. https://wiki.opensourceecology.org/wiki/Agricultural_Robot
      2. https://wiki.opensourceecology.org/wiki/Practical_Post-Scarcity_Video
      3. https://wiki.opensourceecology.org/wiki/25%25_Discounts
      4. https://wiki.opensourceecology.org/wiki/User:Marcin
      5. https://wiki.opensourceecology.org/wiki/Michael_Log
      6. https://wiki.opensourceecology.org/wiki/Marcin_Biography/es
      7. https://wiki.opensourceecology.org/wiki/Tractor_Construction_Set_2017
      8. https://wiki.opensourceecology.org/wiki/CEB_Press_Fabrication_videos
      9. https://wiki.opensourceecology.org/wiki/Development_Team_Log
      10. https://wiki.opensourceecology.org/wiki/Tractor_User_Manual
      11. https://wiki.opensourceecology.org/wiki/Marcin_Biography
      12. https://wiki.opensourceecology.org/wiki/CNC_Machine
      13. https://wiki.opensourceecology.org/wiki/OSEmail
      14. https://wiki.opensourceecology.org/wiki/Donate
      15. https://wiki.opensourceecology.org/wiki/Hello
    2. 2 pages were "Submitted URL blocked by robots.txt" (status = error)
      1. https://wiki.opensourceecology.org/wiki/Template:Category%3DDealing_with_pests
      2. https://wiki.opensourceecology.org/wiki/Template:Category%3DPests_and_weeds
  4. for the second group: yes, we use robots.txt to block robots from accessing our Template pages. I'm surprised that the sitemap mediawiki generated included these at all and that, if it did include them, it only included 2 templates. In any case, this is not an issue
user@ose:~$ curl https://wiki.opensourceecology.org/robots.txt
User-agent: *
Disallow: /index.php?
Disallow: /index.php/Help
Disallow: /index.php/Special:
Disallow: /index.php/Template
Disallow: /wiki/Help
Disallow: /wiki/Special:
Disallow: /wiki/Template
Crawl-delay: 15
user@ose:~$ 
  5. regarding the 15 pages with "Submitted URL has crawl issue" (status = error):
      1. https://wiki.opensourceecology.org/wiki/Agricultural_Robot
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      2. https://wiki.opensourceecology.org/wiki/Practical_Post-Scarcity_Video
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      3. https://wiki.opensourceecology.org/wiki/25%25_Discounts
        1. I attempted a "Fetch as Google," and I got an error. Indeed, I reproduced a 403.
user@ose:~$ curl -i "https://wiki.opensourceecology.org/wiki/25%25_Discounts"
HTTP/1.1 403 Forbidden
Server: nginx
Date: Mon, 25 Jun 2018 17:47:45 GMT
Content-Type: text/html; charset=iso-8859-1
Content-Length: 220
Connection: keep-alive
X-Varnish: 10297078 10201494
Age: 916
Via: 1.1 varnish-v4

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /wiki/25%_Discounts
on this server.</p>
</body></html>
user@ose:~$ 
        2. I confirmed that nginx saw the request, and it is listed as a 403
[root@hetzner2 httpd]# tail -f /var/log/nginx/wiki.opensourceecology.org/access.log | grep -i '_Discounts'
65.49.163.89 - - [25/Jun/2018:17:53:56 +0000] "GET /wiki/25%25_Discounts HTTP/1.1" 403 220 "-" "curl/7.52.1" "-"
        3. digging deeper, I saw that varnish was also processing the request, and (doh!) it's registering a hit for a 403. It shouldn't be caching 403 errors!
[root@hetzner2 httpd]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 65.49.163.89'"
* << Request  >> 10297385
-   Begin          req 10297384 rxreq
-   Timestamp      Start: 1529949449.294763 0.000000 0.000000
-   Timestamp      Req: 1529949449.294763 0.000000 0.000000
-   ReqStart       127.0.0.1 48760
-   ReqMethod      GET
-   ReqURL         /wiki/25%25_Discounts
-   ReqProtocol    HTTP/1.0
-   ReqHeader      X-Real-IP: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: wiki.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      User-Agent: curl/7.52.1
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   Hit            10201494
-   VCL_call       HIT
-   VCL_return     deliver
-   RespProtocol   HTTP/1.1
-   RespStatus     403
-   RespReason     Forbidden
-   RespHeader     Date: Mon, 25 Jun 2018 17:32:29 GMT
-   RespHeader     Server: Apache
-   RespHeader     Content-Length: 220
-   RespHeader     Content-Type: text/html; charset=iso-8859-1
-   RespHeader     X-Varnish: 10297385 10201494
-   RespHeader     Age: 1499
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1529949449.294801 0.000038 0.000038
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   Timestamp      Resp: 1529949449.294821 0.000058 0.000020
-   Debug          "XXX REF 2"
-   ReqAcct        234 0 234 226 220 446
-   End
        4. I'm using the default config from mediawiki's guide, but I guess they don't use mod_security https://www.mediawiki.org/wiki/Manual:Varnish_caching#Configuring_Varnish_4.1
        5. first, I need to reproduce the issue (clear the varnish cache, hit the page & get a 200; clear the varnish cache again, hit it in some malicious way & get a 403; then hit it in a non-malicious way & get a 403 again)
# I ran this on the server to clear the varnish cache just for the relevant page
[root@hetzner2 sites-enabled]# varnishadm 'ban req.url ~ "_Discounts"'

[root@hetzner2 sites-enabled]# 

# then I attempted to hit the server in an unmalicious way
user@ose:~$ curl -i "https://wiki.opensourceecology.org/wiki/25%25_Discounts"
HTTP/1.1 403 Forbidden
Server: nginx
Date: Mon, 25 Jun 2018 18:12:55 GMT
Content-Type: text/html; charset=iso-8859-1
Content-Length: 220
Connection: keep-alive
X-Varnish: 10190775
Age: 0
Via: 1.1 varnish-v4

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /wiki/25%_Discounts
on this server.</p>
</body></html>
user@ose:~$ 
        6. In the above step, I still got the 403! Digging into varnish shows that it *did* miss the cache, then the backend responded again with a 403.
[root@hetzner2 httpd]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 65.49.163.89'"
* << Request  >> 10190775
-   Begin          req 10190774 rxreq
-   Timestamp      Start: 1529950375.885007 0.000000 0.000000
-   Timestamp      Req: 1529950375.885007 0.000000 0.000000
-   ReqStart       127.0.0.1 51724
-   ReqMethod      GET
-   ReqURL         /wiki/25%25_Discounts
-   ReqProtocol    HTTP/1.0
-   ReqHeader      X-Real-IP: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: wiki.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      User-Agent: curl/7.52.1
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   ExpBan         10201494 banned lookup
-   Debug          "XXXX MISS"
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 10190776 fetch
-   Timestamp      Fetch: 1529950375.885816 0.000809 0.000809
-   RespProtocol   HTTP/1.1
-   RespStatus     403
-   RespReason     Forbidden
-   RespHeader     Date: Mon, 25 Jun 2018 18:12:55 GMT
-   RespHeader     Server: Apache
-   RespHeader     Content-Length: 220
-   RespHeader     Content-Type: text/html; charset=iso-8859-1
-   RespHeader     X-Varnish: 10190775
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1529950375.885848 0.000840 0.000031
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   Timestamp      Resp: 1529950375.885870 0.000863 0.000023
-   Debug          "XXX REF 2"
-   ReqAcct        234 0 234 214 220 434
-   End
        7. repeating this step (clearing varnish, attempting to curl unmaliciously) & tailing the apache logs showed the error
[root@hetzner2 httpd]# tail -f wiki.opensourceecology.org/error_log wiki.opensourceecology.org/access_log | grep -i '_Discount'

[Mon Jun 25 18:21:11.239838 2018] [:error] [pid 2751] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Invalid URL Encoding: Non-hexadecimal digits used at REQUEST_URI. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "461"] [id "950107"] [rev "2"] [msg "URL Encoding Abuse Attack Attempt"] [severity "WARNING"] [ver "OWASP_CRS/2.2.9"] [maturity "6"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/EVASION"] [hostname "wiki.opensourceecology.org"] [uri "/wiki/25%_Discounts"] [unique_id "WzEyl9G0FuNKXv-7ydMIPAAAAAs"]
127.0.0.1 - - [25/Jun/2018:18:21:11 +0000] "GET /wiki/25%25_Discounts HTTP/1.1" 403 220 "-" "curl/7.52.1"
        8. I whitelisted this rule id = 950107, a protocol violation. Then I tried again, and it worked!
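# the whitelist itself is just a rule-removal directive in apache's mod_security config
# for the wiki vhost (a sketch; the exact file & scoping in our setup may differ)
<IfModule mod_security2.c>
   # disable the OWASP CRS "URL Encoding Abuse" rule that misfires on the literal '%'
   SecRuleRemoveById 950107
</IfModule>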
# ran on the server
[root@hetzner2 sites-enabled]# varnishadm 'ban req.url ~ "_Discounts"'

[root@hetzner2 sites-enabled]# 

# ran on the client
user@ose:~$ curl -I "https://wiki.opensourceecology.org/wiki/25%25_Discounts"
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 25 Jun 2018 19:26:50 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
X-Content-Type-Options: nosniff
Content-language: en
X-UA-Compatible: IE=Edge
Link: </images/ose-logo.png?be82f>;rel=preload;as=image
Vary: Accept-Encoding,Cookie
Cache-Control: s-maxage=18000, must-revalidate, max-age=0
Last-Modified: Mon, 25 Jun 2018 14:26:50 GMT
X-XSS-Protection: 1; mode=block
X-Varnish: 10084665
Age: 0
Via: 1.1 varnish-v4
Strict-Transport-Security: max-age=15552001
Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report"

user@ose:~$ 

# ran on the server
[root@hetzner2 httpd]# tail -f wiki.opensourceecology.org/error_log wiki.opensourceecology.org/access_log | grep -i '_Discount'

127.0.0.1 - - [25/Jun/2018:19:26:50 +0000] "HEAD /wiki/25%25_Discounts HTTP/1.0" 200 - "-" "curl/7.52.1"
        9. the easiest way to simulate a malicious request is to set the useragent to empty. This worked fine.
user@ose:~$ curl "https://wiki.opensourceecology.org/wiki/25%25_Discounts" -A ''
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /wiki/25%_Discounts
on this server.</p>
</body></html>
user@ose:~$ 
        10. I was able to reproduce the issue, proving that varnish was caching the 403 error
# first I clear the varnish cache on the server
[root@hetzner2 sites-enabled]# varnishadm 'ban req.url ~ "_Discounts"'

[root@hetzner2 sites-enabled]# 

# then I hit the server 'maliciously' using an empty user agent from the client
user@ose:~$ curl "https://wiki.opensourceecology.org/wiki/25%25_Discounts" -A ''
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /wiki/25%_Discounts
on this server.</p>
</body></html>
user@ose:~$

# then I observe the server's varnish logs show that it was a MISS, and it got a 403 from the apache backend
[root@hetzner2 httpd]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 65.49.163.89'"
* << Request  >> 10084940
-   Begin          req 10084939 rxreq
-   Timestamp      Start: 1529956554.678479 0.000000 0.000000
-   Timestamp      Req: 1529956554.678479 0.000000 0.000000
-   ReqStart       127.0.0.1 48158
-   ReqMethod      GET
-   ReqURL         /wiki/25%25_Discounts
-   ReqProtocol    HTTP/1.0
-   ReqHeader      X-Real-IP: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: wiki.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   ExpBan         10359649 banned lookup
-   Debug          "XXXX MISS"
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 10084941 fetch
-   Timestamp      Fetch: 1529956554.679141 0.000662 0.000662
-   RespProtocol   HTTP/1.1
-   RespStatus     403
-   RespReason     Forbidden
-   RespHeader     Date: Mon, 25 Jun 2018 19:55:54 GMT
-   RespHeader     Server: Apache
-   RespHeader     Content-Length: 220
-   RespHeader     Content-Type: text/html; charset=iso-8859-1
-   RespHeader     X-Varnish: 10084940
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1529956554.679173 0.000694 0.000032
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   Timestamp      Resp: 1529956554.679207 0.000727 0.000034
-   Debug          "XXX REF 2"
-   ReqAcct        209 0 209 214 220 434
-   End

# then I hit the server again, this time unmaliciously (without the empty useragent)
user@ose:~$ curl "https://wiki.opensourceecology.org/wiki/25%25_Discounts" 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /wiki/25%_Discounts
on this server.</p>
</body></html>
user@ose:~$ 

# then I observe the logs, showing that the 403 the user got back was indeed cached
[root@hetzner2 httpd]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 65.49.163.89'"
* << Request  >> 10084945
-   Begin          req 10084944 rxreq
-   Timestamp      Start: 1529956570.226469 0.000000 0.000000
-   Timestamp      Req: 1529956570.226469 0.000000 0.000000
-   ReqStart       127.0.0.1 48168
-   ReqMethod      GET
-   ReqURL         /wiki/25%25_Discounts
-   ReqProtocol    HTTP/1.0
-   ReqHeader      X-Real-IP: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: wiki.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      User-Agent: curl/7.52.1
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   Hit            10084941
-   VCL_call       HIT
-   VCL_return     deliver
-   RespProtocol   HTTP/1.1
-   RespStatus     403
-   RespReason     Forbidden
-   RespHeader     Date: Mon, 25 Jun 2018 19:55:54 GMT
-   RespHeader     Server: Apache
-   RespHeader     Content-Length: 220
-   RespHeader     Content-Type: text/html; charset=iso-8859-1
-   RespHeader     X-Varnish: 10084945 10084941
-   RespHeader     Age: 16
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1529956570.226507 0.000039 0.000039
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   Timestamp      Resp: 1529956570.226526 0.000058 0.000019
-   Debug          "XXX REF 2"
-   ReqAcct        234 0 234 224 220 444
-   End
        11. now that I've proven it to be reproducible with a defined process, I'll attempt to fix the wiki's varnish config and see if it now does *not* cache the 403, as desired. The fix is to copy a similar conditional statement from the wordpress configs into the vcl_backend_response subroutine of /etc/varnish/sites-enabled/wiki.opensourceecology.org. Here's the entire new subroutine
sub vcl_backend_response {

   if ( beresp.backend.name == "wiki_opensourceecology_org" ){

      # set minimum timeouts to auto-discard stored objects
      set beresp.grace = 120s;

      if (beresp.ttl < 48h) {
         set beresp.ttl = 48h;
      }

      # Avoid caching error responses
      if ( beresp.status != 200 && beresp.status != 203 && beresp.status != 300 && beresp.status != 301 && beresp.status != 302 && beresp.status != 304 && beresp.status != 307 && beresp.status != 410 && beresp.status != 404 ) {
         set beresp.uncacheable = true;
         return (deliver);
      }

      if (!beresp.ttl > 0s) {
         set beresp.uncacheable = true;
         return (deliver);
      }

      if (beresp.http.Set-Cookie) {
         set beresp.uncacheable = true;
         return (deliver);
      }

#     if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
#        set beresp.uncacheable = true;
#        return (deliver);
#     }

      if (beresp.http.Authorization && !beresp.http.Cache-Control ~ "public") {
         set beresp.uncacheable = true;
         return (deliver);
      }

      return (deliver);

   }

}
        12. after making the change above, I tested the config's syntax and, as there were no errors, reloaded varnish
[root@hetzner2 sites-enabled]# varnishd -Cf /etc/varnish/default.vcl && service varnish reload
...
Redirecting to /bin/systemctl reload varnish.service
[root@hetzner2 sites-enabled]# 
        13. I repeated the test above, but on the second retry varnish returned a hit-for-pass instead of the ideal sequence (a 403, then a miss, then a hit, then further hits). So I changed the subroutine's logic a bit, choosing not to use the 'uncacheable' declaration that I had borrowed from the nearby lines of the mediawiki-provided config for this subroutine.
sub vcl_backend_response {

   if ( beresp.backend.name == "wiki_opensourceecology_org" ){

      # set minimum timeouts to auto-discard stored objects
      set beresp.grace = 120s;

      if (beresp.ttl < 48h) {
         set beresp.ttl = 48h;
      }

      # Avoid caching error responses
      if ( beresp.status != 200 && beresp.status != 203 && beresp.status != 300 && beresp.status != 301 && beresp.status != 302 && beresp.status != 304 && beresp.status != 307 && beresp.status != 410 && beresp.status != 404 ) {
         set beresp.ttl   = 0s;
         set beresp.grace = 15s;
         return (deliver);
      }

      if (!beresp.ttl > 0s) {
         set beresp.uncacheable = true;
         return (deliver);
      }

      if (beresp.http.Set-Cookie) {
         set beresp.uncacheable = true;
         return (deliver);
      }

#     if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
#        set beresp.uncacheable = true;
#        return (deliver);
#     }

      if (beresp.http.Authorization && !beresp.http.Cache-Control ~ "public") {
         set beresp.uncacheable = true;
         return (deliver);
      }

      return (deliver);

   }

}
        14. this produced the desired results: a 403 (when malicious), then 200s on subsequent non-malicious queries
* << Request  >> 10044374
-   Begin          req 10044373 rxreq
-   Timestamp      Start: 1529958240.758676 0.000000 0.000000
-   Timestamp      Req: 1529958240.758676 0.000000 0.000000
-   ReqStart       127.0.0.1 54452
-   ReqMethod      GET
-   ReqURL         /wiki/25%25_Discounts
-   ReqProtocol    HTTP/1.0
-   ReqHeader      X-Real-IP: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: wiki.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   ExpBan         10043967 banned lookup
-   Debug          "XXXX MISS"
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 10044375 fetch
-   Timestamp      Fetch: 1529958240.759438 0.000762 0.000762
-   RespProtocol   HTTP/1.1
-   RespStatus     403
-   RespReason     Forbidden
-   RespHeader     Date: Mon, 25 Jun 2018 20:24:00 GMT
-   RespHeader     Server: Apache
-   RespHeader     Content-Length: 220
-   RespHeader     Content-Type: text/html; charset=iso-8859-1
-   RespHeader     X-Varnish: 10044374
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1529958240.759455 0.000779 0.000018
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   Timestamp      Resp: 1529958240.759482 0.000806 0.000027
-   Debug          "XXX REF 2"
-   ReqAcct        209 0 209 214 220 434
-   End

* << Request  >> 10044377
-   Begin          req 10044376 rxreq
-   Timestamp      Start: 1529958244.980716 0.000000 0.000000
-   Timestamp      Req: 1529958244.980716 0.000000 0.000000
-   ReqStart       127.0.0.1 54456
-   ReqMethod      GET
-   ReqURL         /wiki/25%25_Discounts
-   ReqProtocol    HTTP/1.0
-   ReqHeader      X-Real-IP: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: wiki.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      User-Agent: curl/7.52.1
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   Debug          "XXXX MISS"
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 10044378 fetch
-   Timestamp      Fetch: 1529958245.159527 0.178811 0.178811
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Date: Mon, 25 Jun 2018 20:24:04 GMT
-   RespHeader     Server: Apache
-   RespHeader     X-Content-Type-Options: nosniff
-   RespHeader     Content-language: en
-   RespHeader     X-UA-Compatible: IE=Edge
-   RespHeader     Link: </images/ose-logo.png?be82f>;rel=preload;as=image
-   RespHeader     Vary: Accept-Encoding,Cookie
-   RespHeader     Cache-Control: s-maxage=18000, must-revalidate, max-age=0
-   RespHeader     Last-Modified: Mon, 25 Jun 2018 15:24:05 GMT
-   RespHeader     X-XSS-Protection: 1; mode=block
-   RespHeader     Content-Type: text/html; charset=UTF-8
-   RespHeader     X-Varnish: 10044377
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1529958245.159577 0.178861 0.000050
-   Debug          "RES_MODE 4"
-   RespHeader     Connection: close
-   RespHeader     Accept-Ranges: bytes
-   Timestamp      Resp: 1529958245.174460 0.193744 0.014883
-   Debug          "XXX REF 2"
-   ReqAcct        234 0 234 509 14354 14863
-   End

* << Request  >> 10044380
-   Begin          req 10044379 rxreq
-   Timestamp      Start: 1529958250.800502 0.000000 0.000000
-   Timestamp      Req: 1529958250.800502 0.000000 0.000000
-   ReqStart       127.0.0.1 54462
-   ReqMethod      GET
-   ReqURL         /wiki/25%25_Discounts
-   ReqProtocol    HTTP/1.0
-   ReqHeader      X-Real-IP: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: wiki.opensourceecology.org
-   ReqHeader      Connection: close
-   ReqHeader      User-Agent: curl/7.52.1
-   ReqHeader      Accept: */*
-   ReqUnset       X-Forwarded-For: 65.49.163.89
-   ReqHeader      X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   VCL_call       RECV
-   ReqUnset       X-Forwarded-For: 65.49.163.89, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 127.0.0.1
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   Hit            10044378
-   VCL_call       HIT
-   VCL_return     deliver
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Date: Mon, 25 Jun 2018 20:24:04 GMT
-   RespHeader     Server: Apache
-   RespHeader     X-Content-Type-Options: nosniff
-   RespHeader     Content-language: en
-   RespHeader     X-UA-Compatible: IE=Edge
-   RespHeader     Link: </images/ose-logo.png?be82f>;rel=preload;as=image
-   RespHeader     Vary: Accept-Encoding,Cookie
-   RespHeader     Cache-Control: s-maxage=18000, must-revalidate, max-age=0
-   RespHeader     Last-Modified: Mon, 25 Jun 2018 15:24:05 GMT
-   RespHeader     X-XSS-Protection: 1; mode=block
-   RespHeader     Content-Type: text/html; charset=UTF-8
-   RespHeader     X-Varnish: 10044380 10044378
-   RespHeader     Age: 6
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1529958250.800542 0.000040 0.000040
-   RespHeader     Content-Length: 14354
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   RespHeader     Accept-Ranges: bytes
-   Timestamp      Resp: 1529958250.800574 0.000072 0.000032
-   Debug          "XXX REF 2"
-   ReqAcct        234 0 234 541 14354 14895
-   End
        15. fixed! I checked it out in "Fetch as Google" and confirmed it worked there too, so I clicked the 'Request indexing' button
      4. https://wiki.opensourceecology.org/wiki/User:Marcin
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      5. https://wiki.opensourceecology.org/wiki/Michael_Log
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      6. https://wiki.opensourceecology.org/wiki/Marcin_Biography/es
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      7. https://wiki.opensourceecology.org/wiki/Tractor_Construction_Set_2017
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      8. https://wiki.opensourceecology.org/wiki/CEB_Press_Fabrication_videos
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      9. https://wiki.opensourceecology.org/wiki/Development_Team_Log
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      10. https://wiki.opensourceecology.org/wiki/Tractor_User_Manual
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      11. https://wiki.opensourceecology.org/wiki/Marcin_Biography
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index
      12. https://wiki.opensourceecology.org/wiki/CNC_Machine
        1. I attempted a "Fetch as Google," and I got no issues. I clicked the button to add it to the index, but I got an error = "An error occured. Please try again later."
      13. https://wiki.opensourceecology.org/wiki/OSEmail
        1. I attempted a "Fetch as Google," and I got no issues.
      14. https://wiki.opensourceecology.org/wiki/Donate
        1. I attempted a "Fetch as Google," and I got no issues.
      15. https://wiki.opensourceecology.org/wiki/Hello
        1. I attempted a "Fetch as Google," and I got no issues.
    1. 2 pages were "Indexed, though blocked by robots.txt" (status = warning)
      1. https://wiki.opensourceecology.org/index.php?title=Civilization_Starter_Kit_DVD_v0.01/eo&action=edit&redlink=1
      2. https://wiki.opensourceecology.org/index.php?title=Civilization_Starter_Kit_DVD_v0.01/th&action=edit&redlink=1
        1. I have no idea why google indexed these despite our robots.txt; apparently robots.txt only blocks crawling, not indexing (google can still index a blocked URL if it's linked from elsewhere), so I guess that's out of our hands..
    2. 30299 pages were "Discovered - currently not indexed" (status = excluded)
      1. 48 of these pages were "Excluded by 'noindex' tag", as they should be. Here are a few examples:
        1. https://wiki.opensourceecology.org/wiki/Special:WhatLinksHere/Civilization_Starter_Kit_DVD_v0.01/es
        2. https://wiki.opensourceecology.org/index.php?title=Governance&action=history
        3. https://wiki.opensourceecology.org/index.php?title=OSE_Platform&action=info
      2. 48 of these pages (same count as above, not a typo) were "Blocked by robots.txt", as they should be. Here are a few examples:
        1. https://wiki.opensourceecology.org/index.php?title=Pattern_Language&printable=yes
        2. https://wiki.opensourceecology.org/index.php?title=Cost_of_Living&action=info
        3. https://wiki.opensourceecology.org/index.php?title=Talk:Greenhouses&action=edit&redlink=1
      3. 32 of these pages were "Soft 404"; apparently that means a page that returns a 200 but looks to Google like an error or empty page. Here are a few examples:
        1. https://wiki.opensourceecology.org/wiki/Special:WhatLinksHere/Category:OSEC
        2. https://wiki.opensourceecology.org/wiki/Category:Proposals
        3. https://wiki.opensourceecology.org/index.php?title=File:Proposaltemplate.doc&action=edit
      4. 11 of these pages were "crawl anomaly"; I have no idea what this means. Here are a few examples:
        1. https://wiki.opensourceecology.org/load.php?debug=false&lang=en&modules=mediawiki.legacy.commonPrint,shared%7Cmediawiki.sectionAnchor%7Cmediawiki.skinning.interface%7Cskins.vector.styles&only=styles&skin=vector
        2. https://wiki.opensourceecology.org/index.php?title=Template:OrigLang
        3. https://wiki.opensourceecology.org/index.php?title=Main_Page&action=info
      5. 7 of these pages were "Submitted URL not selected as canonical"
        1. https://wiki.opensourceecology.org/wiki/Earth_Sheltered_Greenhouse
        2. https://wiki.opensourceecology.org/wiki/Civilization_Starter_Kit_v0.01
        3. https://wiki.opensourceecology.org/wiki/Walipini
        4. https://wiki.opensourceecology.org/wiki/Solar_Microhut
        5. https://wiki.opensourceecology.org/wiki/Lathe
        6. https://wiki.opensourceecology.org/wiki/Category:Open_source_ecology_community
        7. https://wiki.opensourceecology.org/wiki/Tractor
      6. note that all 7 of the above pages redirect to another page, so it looks like Google made the right choice here.
      7. ~30,300 pages are listed as "Discovered - currently not indexed"
      8. this page defines what this means as "The page was found by Google, but not crawled yet. Typically, Google tried to crawl the URL but the site was overloaded; therefore Google had to reschedule the crawl. This is why the last crawl date is empty on the report." https://support.google.com/webmasters/answer/7440203#discovered__unclear_status
      9. so I guess we just wait for these pages to get crawled
      10. 1,151 pages were "Crawled - currently not indexed"
        1. this page defines what this means as "The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling."
        2. I translate that to: "your pages suck, and aren't worth storing"
        3. 100% of these URLs end with a date "6/19/18" or "6/18/18" and don't actually exist in our wiki. So that's fine too. For example: https://wiki.opensourceecology.org/wiki/Hydronic_Heat_System,6/19/18
  6. hopefully the 30,300 pages listed as "Discovered - currently not indexed" will trickle into the index at a rate of at least roughly 1,000 per week over the next few weeks
  7. I emailed Marcin my findings & extrapolations from the above research
  8. I now see 43 pages of search results from google about our wiki https://www.google.com/search?q=site:wiki.opensourceecology.org
  1. ...
  1. I successfully created a certificate for jangouts.opensourceecology.org using `certbot certonly` on our ec2 instance
[root@ip-172-31-28-115 yum.repos.d]# certbot certonly
...
Input the webroot for jangouts.opensourceecology.org: (Enter 'c' to cancel): /var/www/html/jangouts.opensourceecology.org/htdocs
Waiting for verification...
Resetting dropped connection: acme-v01.api.letsencrypt.org
Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/jangouts.opensourceecology.org/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/jangouts.opensourceecology.org/privkey.pem
   Your cert will expire on 2018-09-23. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

[root@ip-172-31-28-115 yum.repos.d]#
  2. I updated the nginx config with these certs & reloaded it
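# the relevant vhost lines look something like this (a sketch, not a verbatim copy
# of our config; the cert paths are from the certbot output above)
server {
   listen 443 ssl;
   server_name jangouts.opensourceecology.org;
   ssl_certificate     /etc/letsencrypt/live/jangouts.opensourceecology.org/fullchain.pem;
   ssl_certificate_key /etc/letsencrypt/live/jangouts.opensourceecology.org/privkey.pem;
   # ... the rest of the jangouts frontend config is unchanged ...
}
# then test the syntax & reload
nginx -t && systemctl reload nginx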
  3. I confirmed that the site is now working without needing crazy exceptions for the janus gateway API's port and the frontend 443 port.
  4. I sent Marcin an email with links pointing to the relevant janus gateway demos and jangouts, asking him to test it. Once his tests are done, I'll tear down this centos-based node and rebuild it using debian so we can do a POC of jitsi. We can then launch an ose jitsi running on ec2 for only a few days out of the year--when we have workshops or extreme builds & need it to support many consume-only viewers in a way that webrtc won't scale to without some self-hosted logic (like scraping & republishing a private jitsi meet into a public stream using jitsi's Jibri https://github.com/jitsi/jibri)
    1. we can also automate this build process using ansible. The Freedom of the Press Foundation has already built an ansible role for this https://github.com/freedomofpress/ansible-role-jitsi-meet

Tue Jun 19, 2018

  1. since I stopped 403ing all the useragents with 'bot' in their name yesterday (e.g. googlebot), I noticed that we're finally indexed again on Google.com! This now yields 13 pages of results (yesterday it was an entirely empty set) https://www.google.com/search?q=site:wiki.opensourceecology.org
    1. I logged into the Google Webmaster Tools. There's still a lot of pages that say "no data," and an infobox said some data may be a week old. I should just wait a couple weeks.
    2. It did finish processing our sitemap, and noted 32,360 pages were submitted :)
      1. there's a '-' in the index field, which appears that it should hold a date. My guess: they haven't finished indexing our sitemap yet.
    3. finally, the "robots.txt Tester" page shows our robots page, but the fucking page says that our line "Crawl-delay: 15" is *ignored* by Googlebot!
  2. I did some more research on sitemaps & seo. It appears that google may actually not index all your pages, as it will generally stop indexing at 3 or 4 levels deep, especially if the level above doesn't have a very good rank. Sitemaps may help with that.
  3. then I found this link suggesting (with sources) that it's actually a myth, so I'll abstain from cronifying this https://northcutt.com/wr/google-ranking-factors/
    1. I spent some time reading through this. Most of my enlightenment came from the "things that harm your rank." For example, I know we have some old linkfarm articles to viagra and whatnot. We should clean that up or google won't be happy with us.
    2. so before the migration, I did find many articles google bombing links to pharmaceuticals, etc. I had found them with Recent Changes, but at the time we didn't have Nuke = Mass Delete. So I decided just to hold off until after the migration. Well, now that the migration is complete, I can't find the old spam articles! It would be nice if there was a better search feature that could pinpoint the spam. Google mostly provides advice on how to prevent spam. We got the prevention down, we just need to find the old stuff :\
    3. recent changes only goes back 30 days, and this was months ago *shrug*
    4. I blocked 1 user and their spammy article = Johngraham
    5. another option is to go through our external links, but doing so by hand is an insurmountable task (see the sketch after this list) https://wiki.opensourceecology.org/index.php?title=Special:LinkSearch&limit=500&offset=1500&target=http%3A%2F%2F*
    6. also useful is this category: candidates for speedy deletion. Not sure how it's populated though https://wiki.opensourceecology.org/wiki/Category:Candidates_for_speedy_deletion
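# one way to triage the spam hunt (a sketch, assuming direct access to the wiki db;
# the table & column names are stock mediawiki schema, the db name is inferred from
# the sitemap filename above, and the keyword is a placeholder)
[root@hetzner2 ~]# mysql osewiki_db -e "SELECT p.page_title, el.el_to FROM externallinks el JOIN page p ON p.page_id = el.el_from WHERE el.el_to LIKE '%viagra%' LIMIT 50;"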

Mon Jun 18, 2018

  1. reset Germán's password (user = Cabeza_de_Pomelo) & sent it to them via email
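# for the record, one way to reset a wiki password is mediawiki's bundled maintenance
# script (a sketch, not necessarily the exact method used; the password is a placeholder)
[root@hetzner2 htdocs]# php maintenance/changePassword.php --user='Cabeza_de_Pomelo' --password='<new password>'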
  2. Marcin mentioned that he's getting much worse search results from google for our wiki post-migration
    1. This is odd; I'd expect changing to https to increase our ranking
    2. Indeed, simply doing a search for our site yields literally *no* results. It looks almost like they banned us? https://www.google.com/search?q=site%3Awiki.opensourceecology.org
    3. I registered for the "Google Webmaster Tools" & validated our site, which included adding a nonce html verification file to the docroot. Currently there is no data; hopefully we'll get some info from this in a few days or weeks. https://www.google.com/webmasters/tools/home
    4. I added catarina & marcin (by email address) as users to the Google Webmaster Tools with full permissions
    5. I confirmed that google has *not* made any "manual actions" against our site (i.e. banning us due to spam or something) https://www.google.com/webmasters/tools/manual-action
    6. I confirmed that we have "0" URLs indexed by google and that "0" were blocked by robots.txt, so this can't be a robots.txt issue either.. https://www.google.com/webmasters/tools/index-status
    7. one of the things that the Google Webmaster Tools asks for in many places is a sitemap. I did some research on online tools for generating an xml sitemap to upload to google, and on local linux command-line tools for generating one. Then, I discovered that mediawiki has a built-in sitemap generator :) https://www.mediawiki.org/wiki/Manual:GenerateSitemap.php
pushd /var/www/html/wiki.opensourceecology.org/htdocs
mkdir sitemap
pushd sitemap
time nice php /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/generateSitemap.php --fspath=/var/www/html/wiki.opensourceecology.org/htdocs/sitemap/ --urlpath=https://wiki.opensourceecology.org/sitemap/ --server=https://wiki.opensourceecology.org
    8. the generation took less than 3 seconds. That beats hours for wget!
    9. the new sitemap was made public here https://wiki.opensourceecology.org/sitemap/sitemap-index-osewiki_db-wiki_.xml
    10. I was able to hit this sitemap fine, but google complained that it got a 403. Interesting..
    11. there were no modsecurity alerts, but I think it was an nginx best-practice I picked up from here https://www.tecmint.com/nginx-web-server-security-hardening-and-performance-tips/
    12. I literally had a line in the main nginx file that said this may need adjustment if it impacts SEO
[root@hetzner2 httpd]# grep -C 2 -i seo /etc/nginx/nginx.conf
   server_tokens off;

   # block some bot's useragents (may need to remove some, if impacts SEO)
   include /etc/nginx/blockuseragents.rules;

[root@hetzner2 httpd]# 
    13. so I commented out the match for 'bot' in this list & reloaded nginx
[root@hetzner2 httpd]# cat /etc/nginx/blockuseragents.rules 
map $http_user_agent $blockedagent {
		default         0;
		~*malicious     1;
#        ~*bot           1;
		~*backdoor      1;
		~*crawler       1;
		~*bandit        1;
}
[root@hetzner2 httpd]# 
    14. that fixed it! It's now processing our sitemap..
    15. Marcin specifically asked if I had removed any SEO-related extensions. I checked the list of deprecated extensions that I removed, and nothing looks like it was SEO-related https://wiki.opensourceecology.org/wiki/Maltfield_Log/2018_Q1#Mon_Jan_22.2C_2018
    16. I did find a google-related extension that I removed = 'google-coop' (it was listed as 'archived' and stated not to work with current Mediawiki versions). But this extension appears to be related to Google Custom Search and not SEO https://www.mediawiki.org/wiki/Extension:Google_Custom_Search_Engine
    17. It may be worthwhile to *add* an SEO extension, however. At least, it would be good to add some meta keywords to our pages.
    18. on first look, this appears to be the best SEO extension: it's actively maintained, and it has keyword/description tag, facebook, and twitter support https://www.mediawiki.org/wiki/Extension:WikiSEO
  3. I spent some time documenting wikipedia's expenses. They have hundreds of servers (maybe > 1,000?) and claim to own $13 million in hardware, then spend another $2.1 million on bandwidth/rack space/etc.
  4. I spent some time documenting our mediawiki extensions list. Note that 3 of the extensions we use (OATHAuth, Replace Text, and Renameuser) are all included in the current stable version of mediawiki v1.31--which of course came out the month after our major migration/upgrade https://wiki.opensourceecology.org/wiki/Mediawiki#Extensions
  5. I spent some time filling in the documentation on how to update mediawiki

Fri Jun 15, 2018

  1. finished documenting my notes from attempting to build the OSE adapted D3D extruder from the prusa i3 mk2 https://wiki.opensourceecology.org/wiki/D3D_Extruder#2018-06_Notes
  2. went to install certbot on our cent7 ec2 instance running janus/jangouts, but I got dependency issues with 'python-zope-interface'. The fix was actually to enable an amazon repo, as this is apparently an issue in ec2 described here https://github.com/certbot/certbot/issues/3544
yum -y install yum-utils
yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional
    1. note that the command literally contains 'REGION'; it is not to be substituted with the region you're in
  3. I combined two articles that documented batch image resizing https://wiki.opensourceecology.org/wiki/Batch_Resize
    1. this one is now a redirect https://wiki.opensourceecology.org/index.php?title=Batch_Image_Resize_in_Ubuntu&redirect=no
  4. I documented why we have a 1M cap on image uploads. I don't think we should change that limit until we commit to hiring a full-time sysadmin with a budget of >=$200k/year for IT expenses https://wiki.opensourceecology.org/wiki/Mediawiki#.24maxUploadSize
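# where the cap lives (a sketch; $wgMaxUploadSize is mediawiki's own setting, in bytes,
# and php's limits in php.ini must allow at least as much or they become the effective cap)
$wgMaxUploadSize = 1024 * 1024;   # LocalSettings.php: 1 MB
# php.ini equivalents: upload_max_filesize = 1M, post_max_size >= 1M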

Thu Jun 14, 2018

  1. Tom said he could ssh in but couldn't sudo because he didn't remember his password. I reset his password & sent it to him via email, asking for confirmation again.
  2. Marcin mentioned that a new user registered on the wiki & noted that our Terms of Service are blank upon registering
    1. I fixed a link to openfarmtech to be wiki.opensourceecology.org here https://wiki.opensourceecology.org/wiki/Open_Source_Ecology:General_disclaimer
    2. this exists, though it's just a link https://wiki.opensourceecology.org/wiki/Open_Source_Ecology:Privacy_policy
    3. It appears that the issue is this page https://wiki.opensourceecology.org/wiki/Special:RequestAccount
    4. the above page links to a non-existent https://wiki.opensourceecology.org/wiki/Open_Source_Ecology:Terms_of_Service
    5. I read through the documentation of the ConfirmAccounts extension https://www.mediawiki.org/wiki/Extension:ConfirmAccount
    6. We currently don't have many customizations set for this extension; here's what we have in LocalSettings.php
require_once "$IP/extensions/ConfirmAccount/ConfirmAccount.php";                                          
$wgFileStore['accountreqs'] = "$IP/images/ConfirmAccount";                                                
$wgFileStore['accountcreds'] = "$IP/images/ConfirmAccount";                                               
$wgConfirmAccountContact = 'marcin@opensourceecology.org';
    1. all other defaults and options are best referenced by the sample file at /var/www/html/wiki.opensourceecology.org/htdocs/extensions/ConfirmAccount/ConfirmAccount.config.php
    2. for example, I see that we *are* creating user pages from their bio
    3. we *are* creating a talk page for the user welcoming them to our wiki. This can be customized here https://wiki.opensourceecology.org/wiki/MediaWiki:Confirmaccount-welc
    4. the bio has a minwords = 50
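Rather than guessing variable names, the shipped defaults (like that 50-word bio minimum) can be confirmed by grepping the sample config at the path noted above:
grep -n -i 'minwords' /var/www/html/wiki.opensourceecology.org/htdocs/extensions/ConfirmAccount/ConfirmAccount.config.php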
    5. I sent Marcin an email with my findings
  1. finished editing the video of the d3d workshop & documenting the day's build
    1. https://wiki.opensourceecology.org/wiki/D3D_Workshop_2018-06-02
    2. https://www.youtube.com/watch?v=YjVa8ROUMEk
    3. https://archive.org/details/ose_3dprinterBuild_20180602

Wed Jun 06, 2018

  1. I finished documenting our trials in building a square for the d3d frame out of strips of metal bonded together with epoxy https://wiki.opensourceecology.org/wiki/D3D_frame_built_with_epoxy
  2. Emailed Chris about Persistence
  3. Tom responded that he has been using linux for 25 years and has been a Red Hat Certified Engineer since 2001. I added him to the 'wheel' group, so he should now have sudo rights. I am just waiting for his confirmation (he may need to reset his password).
  4. I emailed Marcin about the risks of granting him root, mentioning that it's atypical for a CEO or Executive Director to have root access on the machines.
  5. fired-up kdenlive in an attempt to mix a worthwhile video out of our stitched-together video of timelapse photos
    1. first, I wanted to find a worthwhile open source soundtrack. I wanted to use "Alien Orange Lab" by My Free Mickey at ccmixter.org http://ccmixter.org/files/myfreemickey/45683
      1. unfortunately, this was licensed CC NC (non-commercial). We probably want something that permits commercial use, so we can use it in a video advertising our workshops, for example
    2. I did some more digging & documented a few tracks that allow commercial use here https://wiki.opensourceecology.org/wiki/Open_Source_Soundtracks#CCMixter
    3. I think I'll use "Welcome in the intox" by Bluemillenium http://ccmixter.org/files/Bluemillenium/57202

Tue Jun 05, 2018

  1. Marcin sent me an email stating he was having issues logging into osemain due to 2FA issues. I logged in successfully & responded telling him to make sure the time on his phone was exact (up to the second) and retry.
  2. Updated my log & hours
  3. Fixed some wiki links around the OSE Developer leaderboard to make it less buried
  4. Added a wiki article about the tree protectors https://wiki.opensourceecology.org/wiki/Tree_protector
  5. Meeting agenda notes
    1. Hazelnut tree protectors. 66 laid over 10 hours.
    2. built jellybox
    3. used jellybox to print components & assemble Prusa i3 mk2 extruder assembly
    4. d3d workshop + time-lapse documentation
  6. weekly meeting
  7. created stubs where I'll spend the rest of my week documenting my work the past 2 weeks
    1. https://wiki.opensourceecology.org/wiki/D3D_frame_built_with_epoxy
    2. https://wiki.opensourceecology.org/wiki/D3D_Extruder#2018-06_Notes

Mon Jun 04, 2018

  1. responded to Tom & Chris emails
  2. dumped photos off phone from the D3D build

Sat Jun 02, 2018

  1. today I helped Marcin build the D3D printer in a workshop at a homestead near Lawrence, Kansas. I made a few notes about what went wrong & how we can improve this machine for future iterations
    1. The bolts for the carriage & end pieces (the 3d printed halves that are sandwiched together by bolts/nuts around the long metal axis rods) are inaccessible under the motor. So if there's an issue, you have to take off the motor's 4 small bolts to access the end piece's 4 bolts. This happened to me when I used the same sized bolts in all 4x holes; in fact, one or two of the bolts should go through the metal frame of the cube, so I had to remove the motor to replace that bolt. Indeed, this also had to be done on a few of the other axes as well, slowing down the build. It would be great if we could alter this piece by moving the bolts further from the motor, making the pieces larger if necessary.
    2. One new addition to this design was the LCD screen, and a piece of plexiglass to which we zip-tied all our electronics, then mounted the plexiglass to the frame via magnets. The issue: when we arranged the electronics on the plexiglass, we were mostly concerned about wire location & lengths. We did not consider that the LCD module (which has an SD card slot on its left side) would need to be accessible. Instead, it was butted against the heated bed relay, making the SD card slot inaccessible. In future builds, we should ensure that this component has nothing to its left side, so that the SD card slot remains accessible.
  2. we used my phone + Open Camera to capture a time-lapse of it

Fri Jun 01, 2018

  1. spent another couple hours installing tree protectors for the hazelnut trees. This is my last day, and the total count of protectors I've installed is 66. I spent a total of ~10 hours putting these 66 protectors up.
  2. I fixed a modsecurity false-positive
    1. 950911 generic attack, http response splitting
  3. spent some time researching steel wire wrapping, to consider whether it could be used in a plastic, 3d printed composite http://www.appropedia.org/Open_Source_Multi-Head_3D_Printer_for_Polymer-Metal_Composite_Component_Manufacturing
  4. spent some time researching carbon fiber, to consider whether it could be used in a plastic, 3d printed composite
  5. spent some time researching bamboo fiber, to consider whether it could be used in a plastic, 3d printed composite
    1. further research is necessary on methods for mechanical retting of bamboo without chemicals

Thr May 31, 2018

  1. spent another couple hours installing tree protectors for the hazelnut trees after Marcin gave me more wire from the workshop yesterday. Current count for hazelnut trees protected is 54.
    1. The field is quite large, but I estimate that I'm 25-40% through the keylines. That means, as of 2018-05 (about 1 year after they were planted), there are approximately a few hundred hazelnut trees planted along keylines at FeF. If those survived last winter (and they can get proper protection from rabbits), then they'll probably stick around. Even though the numbers are far less than the numbers planted, it's hard to complain about having _only_ a few hundred hazelnut trees!
  2. I spent a couple hours in the shop attempting to build a square side (ie: for our D3D printer) out of cut metal strips joined with quick-set epoxy. Logs
  3. I finished assembling one of the extruders!
    1. some of these m3 bolts are 18 mm long, some are 25 mm, one is 20mm. It would be great if we could make them all 20mm.
    2. some of these bolts are supposed to just go straight into the plastic. That doesn't work well. The bolts don't go in & usually just fall out
    3. and the spots where we're supposed to insert a nut into a hole & push it to the back don't work so well. In contrast, the nut catches (where you slide the nut in sideways, rather than pushing it back into the hole) work *great*. If possible, we should use a nut catch on all bolts. Or just have a washer/nut outside the structure entirely, if that's possible.

Wed May 30, 2018

  1. spent some time installing tree protectors for the hazelnut trees before running out of wire
  2. spent some time researching git file size limits. There are no hard limits, but github asks that repos be kept under 1G. Individual file sizes are capped at 100M, which is great. They also have another service called "Git LFS" = Large File Storage. LFS is for files 100M-2G in size, and it also stores versions. It's provided free for up to 1G of storage (so using the full 2G limit isn't free!) https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/
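For future reference, a hypothetical example of tracking our large binaries (e.g. stl files) with the git-lfs client, assuming it's installed:
# enable the lfs hooks for this user, then track a file pattern
git lfs install
git lfs track '*.stl'
git add .gitattributes
git commit -m 'track stl files via git lfs'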
  3. The git page pointed me to a backup solution whose website had a useful comparison of costs for their competitors' services https://www.arqbackup.com/features/
  4. Right now I'm estimating that we're paying ~$100/year for a combination of s3 + glacier storage of ~1 TB. We pay for what we use, but the process of using it is so damn complicated (and slow for Glacier!) that if we could pay <$100 for 1T elsewhere, it's worth considering
    1. Microsoft OneDrive is listed as $7/mo for 1T
    2. Backblaze B2 is listed as $5/mo for 1T
      1. this is probably the cheapest option, and is worth noting for future reference
    3. Google Coldline is listed as $7/mo for 1T
  5. spent some time documenting our server's resource usage now that I have the data to determine what we actually need following the wiki migration. The result: our server is heavily overprovisioned. We could get two hetzner cloud nodes (one for prod, one for dev), and still save 100 EUR/year. https://wiki.opensourceecology.org/wiki/OSE_Server#Looking_Forward
  6. spent some time curating our wiki IT info
  7. fixed mod_security false-positive
    1. 981320 SQLI
  8. imported Marcin's new gpg key into the ossec keyring
mkdir -p /var/tmp/gpg
pushd /var/tmp/gpg
# write multi-line to file for documentation copy & paste
cat << EOF > /var/tmp/gpg/marcin.pubkey2.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQINBFsDbMMBEACypyMZ/J9+M1DvNd+EGhIpRXEKH5WldOXlZtJAh1tGH5cvqBwR
QDCCyVAA+WsiE0IQJByrpxPbj25ypPSMcyhJYmmDOa/0R/NdVuBgJNmWFSyfB/aU
dKAC3brLMC8zUffieug0bVE6vI8QE/DUAGKU5AyNFOD3itFGgI7HtlaknU9ql7um
VxrOM7VU/GmqZcg5hqno6r1mhiG9boitM10lSav+Hylv3Es01pLUvy/NlJEZ10lZ
rQ8RHIQSTpxj9C9L32DjvcJ8BfIHzr6aY/xv5tbPDJuLgsPgn6EoUZkNQAyPMV8J
8MT26UmwlA0WvMkHJze+kgsXD5FUk7MuZM5ttEHKsngN5Sim1M+dBnUtg6QG4zpf
KhyVOOpag1L3iyCwGMbRIX8cTk2Hk39Csf37QKDUrHMbDqAOcQzpr6YcbEO/PPXW
u2VQDJfuiWrgQI7v+ac8uAlH66c6MmEqtsduxVmUYK1C7LlDmcswa4kOP/5WkpJ8
kFwicIM/qpZgewpjtD+ATADs0knA+D+MBQSoMI6FhCLLytz2JpIEtHJFDvDuV/7Q
Yi+RDyFqNr+i7rkNe/xpb5lzrLutN7JEYeMn+LsPH6Ucd8mGJ7j88c0OZUidkOu5
KErG4xHqee87B+Et0/LfEABogDAPnqH027tCMXHu8g2Ih8kZnglEnNeP6wARAQAB
tDBNYXJjaW4gSmFrdWJvd3NraSA8bWFyY2luQG9wZW5zb3VyY2VlY29sb2d5Lm9y
Zz6JAj0EEwEIACcFAlsDbMMCGwMFCRLMAwAFCwkIBwIGFQgJCgsCBBYCAwECHgEC
F4AACgkQ186EWruNpsEEGg//f195qc3hJcyon9Rq+tH7yp8hJN+Pcy3WBnj0Amvg
fPYGR1W5qbCnd9NPdcAz8J1H1Hsbz9+zYDlhIp71iuTlNvtT821du9bLwqplN9UI
YNkRAYm/kwd2qAYNPdVKW0lY9OhvyZA5XrjyQQxVtzQmuB0kTrzX1Br6ZWnMNavd
X2yhfbxJY71HbETMw/VLBubbl8RwpZGzXqye23Il8SryicDk9oIXF6uExB4Ym7PJ
3+h4Sn9dvAQOEsjl57r/ZHNctb4VLqJfVo12ba2XxTx0TGrdYGbiONHu9P2u+Zwj
+NlGmKq2+h2V4pdfl5buj90NtdV3GjJ6wBiBZ0sAO0tKIAp1PWP+Ayo9ep2G8H4A
R8WMJZ6VaXw54C2gLlyzwrsZztqBWljfL8tHtCyOKjN1YJuucn2pzEz/ENTOC3cn
SNzBTXSi/fJBaBgbueMtDE1j0VWjfcm+zIkMfcjUoN+w7gQGEQGc/myvDnEIevcy
ITlejx2MnCj3cjkKrOUXct+3pJwWuxFFfWtOUF91cgAd+FrVw7kQSNfS7T4X7jVO
frVpAXthQaSJIDas5ZqnBlkCdkF+4Oj8IbpV0RUHNIOy0XXJqb6Z3YVUjQdT+Dup
4wmz6dlNdNWfP0iyo6OOuphz+Tz9ZkPDfLXznR+tz62PB/oeHxE0S/zWDXTeyqWp
RyWJAhwEEwEKAAYFAlsDgiEACgkQ/huESU5kDUH+1g//ZoS0E9R6pKfvVBTnuphW
gmCuAgGXAxdMioCYYNqn+jGyy6XdDKcsVATJT0pwctMhkAxEajafzaoBC1/pellh
vO3c7088/BMzRJYSTHeAANd2qctK00ZZZ149T41TedfGaYOEJSNWyjXAZeOM8dlb
qLRkFVf2Zo4rG6ij55ywLS7Cqv1TBMwWzx70gl0TnPxBhBj48Mr/JnhYRQVZtm5c
MiaTncwGJky2CCEXTJqYGT9wDe74w1GGZz5Png59rs6m/1mibdtQ1YbF9gX5pBoK
afpPVRLSISNKyB8PUVNf/2Uqckl1JQ95rcsgTqArcLWeBV4fIm18SfKglYRg2I2u
EP4Fz3oLHROQ6aTPzQgfRX7ZFI7w7lEwOSwQTgC0qjH+y+5a7/H/+wuXtfnuHBsu
nJikH2MzmccRdUQGNtZLJ5HBVpglV3OAMWbknmGOSWdPPaeD68hhOJlfaq51HA8/
ewav9VDPADL/GBy9zSadWRYLCbmkPaksvYdP0exndeLr/GMNsO/jsI/BBgbtG5EM
qc71SEJDjOe+T1/NuoPQiQwaHXgUNgB6/F33sByKPu56M2T+gctpQHg2dw6U7LAK
biE8Q3pCoIzz+2/AZd/+vpdzZ71qahBiOMmrGTJfkqWDar8DP+bXHLYDZBYpExPg
MB+w06S7CsNzrmhBiuysm++JAhwEEwEKAAYFAlsDgj4ACgkQqj7fcWDi2XvKphAA
j/H/atXb2fyN/VJ3tPQ0qsmv3ctDpMnazCwRksTZHzFhZdyi6mu8zlE+iK9SGr5L
PTc+jSK02JnuAQcnZHMNrov6wPPAaoRFDQ7Nv9LUmzVJPnxXuoFxF1akkr0cdxpZ
4nfcCIZS0i43RLWSKuFFz81Oy4Med8U9JXq/NxYw/a5D7PZ7flSSUDYrQwgOQtut
lCebOPb/iu6A87HJ+bhtQb7G7G68HkFmlATnjA0AmeM/+PQ8AR6YH5mbgQeWmPTq
XJdfBs5+AFyUw1zJPa5GPBa+96tqCjOrkxrwR/FCe1L2Q+BfkBRDDg2FA6/pekG4
kzAB++JH3Uai6PSgmifUDMsA++4oRGf7ALqoXnXwu4SOQ2vlrsPjAnV77us5JvdI
Wc346uzvcJAyFOmBuQqRKOOsgYpEj1Q5HKkDuZNLM8e89o0dTOwcm4e8BR00GN6+
OyC6D8U8T72kFv3WvW5HqiP5mmGZDBNWLaXFjLJBSUrFVw9OJWuisSbX6JoISE4Y
RFhzS/REKLn7LDvVvByI3wZF6GLbfKkdzZHoK0Fc4GFiVloDOC7iGiHV+cw2Ivwx
yhsdRciuH5yRnbNhekaNNFddcmq2K6QPLgbDIBX43eFmArRk/mLwyMyvhVQT1NuL
NqudMTihZeO10A4evHqHDmiYIi0cRf9OKct0S7bSwJm5Ag0EWwNswwEQAMcuLBNf
/iTsBnvrI7cD2S24pVGMowaPDWMD1PEfwdL7dHDA4hTnrJexXHxGTFLiKgwhTdCr
ZnBUNmL1CjoN2nO02MlFPcDNsPAa03KSF/IIpx1v/Y7yYN3eJX1nthQ3rPJnguEe
L7mgBYtGeKBBdTWGzfHYDYI8IaUP6Bhfc6Yj+a5NVh+NsObhX0IMoa/lQNLDlfav
tqdDgi7tMuf/Qyz1VvgpYYzXDq9KdipWssCHEDnIggdlJGemQyQMGuAil1TOC+S8
9D/IbOuo3Wa+YMIu7g6cX0jX8Lp0kBH6yNlmIXvvOzV8smOVwemTl8Lt/9hETJqx
aXL9j3DCoYVA87MAGcBD3EMFjQKwVLIWe84B8i5G44yD2DCHBNL/Qeq09klI5T5M
BAgYbNoKx130pf0jGD6dzdfDiMgclAuhz5VTkNh5RCu7rdVgHGQKm5f6sVXCuAfl
/f3Wv66lyCIHbb+LAxnG07bPHLGgHtrS+xRp7d+y7ezaTSmzcOs8lb1C6D/tJXyV
+64lgkTsLid3ljVsMMCRWdRyXYWMOPAt9krFIW6niYHokN5m5uB/l/Vad+PYJ8WA
Agpord+A2vSLliogO1BiDX5lcZmlFPSDDAlr5373KGoBSoYIXq6xcqsvkg3F4RCW
B5YEWgBiX9roXzZ7oMUUK7uhDixFMqAWmN+dABEBAAGJAiUEGAEIAA8FAlsDbMMC
GwwFCRLMAwAACgkQ186EWruNpsEHSw//dXXtuO5V6M+OZ5ArMj1vFudU57PNT+35
5prq6IIDCeRiTanpjIR3GuOGtK3D+4r6Nk1lCoG0CwFPUu7k51gsdkB9DRrRYKX5
fXkl8UC+e8dKo9bMS3jyY9nC7Mv1DPc4gx7VoZeXsxlqz60tEG3HWehLGt03z47C
5I9VVLkTvxt73VH9BHcZaScyPfn3kOlbBSW6U/6ZnRJQ6pc6xPxMsqo0OznYgU9k
YpkS6xwjqT7MYCw4DiW5kSIqNBRMl3suLUUvsJH4OOjilIt4Su+GxftrokmayRYr
XRP0k/Tnf7nrjPl7znbCFxEEVSezaQE2rxQCiKXkmvYzaPjJXZmPgz49oih24Tgn
Llk70qRoRXt2MkZG3TH/t755ORYl5BUeyhnPSzOD/1BiFJze7N+r5mGtJsdjBSyO
LEdjVzsLRhKvheDkrsbguiV8wjaHdfpdPUdYHnWs/HZ7e9HyGoGxaYPRzYosqTu5
pxgIs4c3Toy7nYQjINd/IhLCYL7UBT+ybNMzh15u63UYun37x4mbdkkx7TzZpXex
cnP2bJijq/TJD8PRJNY9GFd5fnluk6xpaFH1YAtQbe/YpTHP0xn45Hi91tsv7S7F
Tl5+BGflBcIQOF80tOHetUrtH3cjp/dtKCE5ZU5Vt9pxlvQeO+azOH1jXQ35vs2t
7VMKgjAEf/c=
=nvDm
-----END PGP PUBLIC KEY BLOCK-----
EOF
gpg --homedir /var/ossec/.gnupg --delete-key marcin
gpg --homedir /var/ossec/.gnupg --import /var/tmp/gpg/marcin.pubkey2.asc
popd
  1. confirmed that the right key was there
[root@hetzner2 gpg]# gpg --homedir /var/ossec/.gnupg --list-keys
gpg: WARNING: unsafe ownership on homedir `/var/ossec/.gnupg'
/var/ossec/.gnupg/pubring.gpg
-----------------------------
pub   4096R/60E2D97B 2017-06-20 [expires: 2027-06-18]
uid                  Michael Altfield <michael@opensourceecology.org>
sub   4096R/9FAD6BEF 2017-06-20 [expires: 2027-06-18]

pub   4096R/4E640D41 2017-09-30 [expires: 2018-10-01]
uid                  Michael Altfield <michael@michaelaltfield.net>
uid                  Michael Altfield <vt6t5up@mail.ru>
sub   4096R/745DD5CF 2017-09-30 [expires: 2018-09-30]

pub   4096R/BB8DA6C1 2018-05-22 [expires: 2028-05-19]
uid                  Marcin Jakubowski <marcin@opensourceecology.org>
sub   4096R/36939DE8 2018-05-22 [expires: 2028-05-19]
  1. documented this on a new page named Ossec https://wiki.opensourceecology.org/wiki/Ossec
  2. I began building the Prusa i3 mk2 extruder assembly
    1. I have never done this before, but I do have the freecad file https://wiki.opensourceecology.org/wiki/File:Prusa_i3_mk2_extruder_adapted.fcstd
    2. I uploaded the 3 3d printables that I exported from the above freecad file. I had previously exported these stl files so I could import them into Cura & print them on the Jellybox. Now that I have 2x of each of the 3x pieces, I can begin the build with the hardware (springs, extruder, fans, bolts, nuts, washers, etc) that Marcin gave me (he ordered it from McMaster-Carr)
      1. idler https://wiki.opensourceecology.org/wiki/File:Prusa_i3_mk2_extruder_adapted_idler.stl
      2. cover https://wiki.opensourceecology.org/wiki/File:Prusa_i3_mk2_extruder_adapted_cover.stl
      3. body https://wiki.opensourceecology.org/wiki/File:Prusa_i3_mk2_extruder_adapted_body.stl
    1. there appears to be some piece (a shaft? labeled "5x16SH") in the body of the shaft bearing that I don't have
    2. I may have installed the fan backwards; it's hard to tell which way this thing will blow, but the cad shows it should blow in, towards the heatsink
    3. I also appear to be missing a few more pieces (some of them 3d printed):
      1. the "fan nozzle" for the print fan
      2. the print fan
      3. the interface
      4. the motor
      5. the proximity sensor
    4. I also had some issues inserting a nut into the following holes
      1. NUT_THIN_M012 into the back of the body, which receives the SCHS_M3_25 from the cover
    5. I extracted stl files for the fan nozzle and a small cylinder for the shaft of the bearing. These have been uploaded to the wiki as well https://wiki.opensourceecology.org/wiki/D3D_Extruder#CAM_.283D_Print_Files_of_Modified_Prisa_i3_MK2.29
    6. The interface needed to be 3d printed too, but it totally lacked holes. They were sketched, but they didn't go through the "interface plate". I spent a few hours in freecad trying to sketch the holes on the face of the plate & push them through, but they only went partially through the interface plate (I tried making the length 999mm, reversing it, & using "full length"; nothing helped). Marcin took the file, copied the interface plate

Tue May 29, 2018

  1. did some hazelnut fence work in the morning
  2. fixed some mod_security false-positives while attempting to upload my log
    1. 950010 generic attack
    2. 973310 xss
  3. I presented my video conference research & wiki migration doings at today's meeting
  4. I began to 3d print the Prusa i3mk2 components
    1. I'm working on the JellyBox that I built last week
    2. I installed freecad on my debian 9 system (I've changed distros since my dev exam)
    3. In freecad, I used spacebar to isolate the 3 distinct components (idler, body, & cover) & export to stl
    4. I downloaded the AppImage of Cura 3.3.1 (https://ultimaker.com/en/products/ultimaker-cura-software/platform/3), made it executable, & fired it up. It allowed me to simply choose the IMade3D Jellybox config. The IMade3D instructions for Cura pointed to a dead link and an old version of Cura, and the website's resources only provide a dmg (I'm using Linux, not a Mac). My best guess is that Cura *used* to need external resources for this config, but now it's built into recent versions
    5. I opened the stl file that freecad spat out into Cura. I just rotated the object so the largest flat surface was facing down, then told Cura to slice it (clicked "Prepare"), and saved it as a g-code file onto the sd card
    6. I took the sd card, threw it into the sd card/lcd display on the Jellybox, browsed to it, and clicked "Print"
    7. the first layer didn't stick, so I made the z offset less negative (moved from approx -22 to -9). That helped in subsequent prints
    8. I noticed that the fans weren't spinning!
    9. Marcin & I spent some time troubleshooting the fans. I had the wrong cables plugged into the wrong fans. One (the heatsink fan for the extruder itself) should be plugged in all the time & spinning whenever the system is on. The other fan (which blows onto the extruded filament to cool it) should be turned on-and-off by the controller
    10. after adjusting the z-height (I needed -0.70!), I finally got a good stick & printed a piece. I set 2 more to print at the same time, so the 3d printed extruder parts should be printed by morning
  5. I checked the server's munin graphs again. The monthly is showing a nice, obvious change from before/after the wiki migration. The current state is all our sites being served over https and cached by varnish on a single server = hetzner2
    1. Obviously, the network graphs show a great increase in traffic after the migration

Munin monthlyGraphAfterWikiMigration network.png

    1. The system graphs show a _slight_ increase in CPU, load, & swap.

Munin monthlyGraphAfterWikiMigration system.png

    1. The varnish graphs show an obvious increase in backend traffic
    2. The varnish graphs show a sliver of orange hit-for-pass appear. This should be our logged-in wiki users. The cache hits still stay wonderfully high, which is why the server's cpu/load is barely impacted by this great inflow of traffic.
    3. The varnish graphs show a definite (but small) uptick in RAM usage
    4. The varnish graphs show a great spike in the number of objects, and then we see it tapering off finally after ~ quadrupling.
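The same trends can be sanity-checked from the shell; a sketch (counter names assume our varnish 4.x):
# cumulative cache hits/misses & current object count since varnishd started
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss -f MAIN.n_object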

Munin monthlyGraphAfterWikiMigration varnish.png

Mon May 28, 2018

  1. further emails with Christian
  2. helped wrap some short welded wire "fencing" around small hazelnut trees to protect from rabbits in the field by the first microhouse at FeF
  3. discovered that my log article was so damn large that the server was rejecting my edit submissions with a 413 error
Request Entity Too Large
The requested resource
/index.php
does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit. 
    1. so I segregated my log into subpages per quarter https://en.wikipedia.org/wiki/Wikipedia:Subpages

Sun May 27, 2018

  1. Helped Christian get his local wiki instance operational

Sat May 26, 2018

  1. Emailed with Christian about making an offline version of the wiki for browsing in kiwix like wikipedia & other popular wikis https://wiki.kiwix.org/wiki/Content
    1. we may have to install Parsoid and/or the VisualEditor extension https://www.howtoforge.com/tutorial/how-to-install-visualeditor-for-mediawiki-on-centos-7/
    2. but I asked Christian to first look into methods that do not require Parsoid, such as zimmer http://www.openzim.org/wiki/Build_your_ZIM_file#zimmer
    3. Christian hit some 403 forbidden messages when hitting the mediawiki api, so I attempted to whitelist his ip address from all mod_security rules
   # disable mod_security with rules as needed
   # (found by logs in: /var/log/httpd/modsec_audit.log)
   <IfModule security2_module>
      SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 960915 200003 981173 981318 981260 950911 973302 973324 973317 981255 958057 958056 973327 950018 950001 958008 973329 950907 950910 950005 950006 959151 958976 950007 959070 950908 981250 981241 981252 981256 981249 981251 973336 958006 958049 958051 973305 973314 973331 973330 973348 981276 959071 973337 958018 958407 958039 973303 973315 973346 973321 960035

      # set the (sans file) POST size limit to 1M (default is 128K)
      SecRequestBodyNoFilesLimit 1000000

      # whitelist an entire IP that we use for scraping mediawiki to produce
      # kiwix-ready zim files for archival & offline viewing
      SecRule REQUEST_HEADERS:X-Forwarded-For "@Contains 176.56.237.113" phase:1,nolog,allow,pass,ctl:ruleEngine=off,id:1
   </IfModule>
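After reloading apache, a quick way to sanity-check that his IP stops generating new hits in the audit log (log path per the comment above):
# count audit-log entries mentioning his IP; re-run after he retries his scrape
grep -c '176.56.237.113' /var/log/httpd/modsec_audit.log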
  1. In testing, I accidentally banned myself. When validating, I saw that our server had banned 2 other IPs, which are crawlers. I went to their site, and found that they obey robots.txt's "Crawl-delay" option http://mj12bot.com/
[root@hetzner2 httpd]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
DROP       all  --  crawl1.bl.semrush.com  anywhere            
DROP       all  --  crawl-vfyrb9.mj12bot.com  anywhere            
DROP       all  --  184-157-49-133.dyn.centurytel.net  anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere            
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:pharos
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:krb524
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:32415
LOG        all  --  anywhere             anywhere             limit: avg 5/min burst 5 LOG level debug prefix "iptables IN denied: "
DROP       all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DROP       all  --  crawl1.bl.semrush.com  anywhere            
DROP       all  --  crawl-vfyrb9.mj12bot.com  anywhere            
DROP       all  --  184-157-49-133.dyn.centurytel.net  anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
ACCEPT     udp  --  anywhere             ns1-coloc.hetzner.de  udp dpt:domain
ACCEPT     udp  --  anywhere             ns2-coloc.hetzner.net  udp dpt:domain
ACCEPT     udp  --  anywhere             ns3-coloc.hetzner.com  udp dpt:domain
LOG        all  --  anywhere             anywhere             limit: avg 5/min burst 5 LOG level debug prefix "iptables OUT denied: "
DROP       tcp  --  anywhere             anywhere             owner UID match apache
DROP       tcp  --  anywhere             anywhere             owner UID match mysql
DROP       tcp  --  anywhere             anywhere             owner UID match varnish
DROP       tcp  --  anywhere             anywhere             owner UID match hitch
DROP       tcp  --  anywhere             anywhere             owner UID match nginx
[root@hetzner2 httpd]# 
  1. so I created a robots.txt file per https://www.mediawiki.org/wiki/Manual:Robots.txt
cat << EOF > /var/www/html/wiki.opensourceecology.org/htdocs/robots.txt
User-agent: *
Disallow: /index.php?
Disallow: /index.php/Help
Disallow: /index.php/Special:
Disallow: /index.php/Template
Disallow: /wiki/Help
Disallow: /wiki/Special:
Disallow: /wiki/Template
Crawl-delay: 15
EOF
chown not-apache:apache /var/www/html/wiki.opensourceecology.org/htdocs/robots.txt
chmod 0040 /var/www/html/wiki.opensourceecology.org/htdocs/robots.txt
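And a quick check that apache actually serves the file (and that the restrictive perms above don't break it):
curl -s https://wiki.opensourceecology.org/robots.txt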
  1. I reset the password for 'Zeitgeist_C.Real_-' & sent them an email
  2. did some back-of-the-envelope calculations for cryptocurrency mining on our wiki, as piratebay does
    1. coinhive says 30 hashes/sec is reasonable. https://coinhive.com/info/faq
    2. yesterday we got 31,453 hits, but much of that is probably spiders. Unfortunately, awstats doesn't give unique visitors per day, only per month. But we've basically been online for only one day, and the monthly count says 2,422 unique visitors. So let's say we get 2,000 visitors per day. 81.4% of our traffic is <=30s long. The average visit is 208s (probably some people leave it open in the background for a very long time (= all our devs)). Let's be conservative & say that the remaining 100-81.4 = 18.6% (rounded down to 18%) of our daily users = 2000 * 0.18 = 360 users are on the site for 30 seconds each. That's 10800 seconds of mining per day
    3. Coinhive pays out in 0.000064 XMR per 1 million hashes. We'll be generating 10800 seconds/day * 30 hashes/s = 324,000 hashes per day. 324,000 * 30 = 9,720,000 hashes per month. 9720000/1000000 = 9.72 * 0.000064 xmr = 0.00062208 xmr / month.
    4. At today's exchange rate, that's $0.10 per month. Fuck.
    5. ^ that said, my calculations are extremely conservative. If we actually have 2,422 unique visitors spending an average of 208 seconds on the site, then it's 2422 * 208 = 503776 seconds per day. 503776 * 30 hashes/s = 15113280 hashes per day. 15113280 * 30 days = 453398400 hashes per month. 453398400/1000000 = 453.3984 * 0.000064 xmr = 0.029017498 xmr / month.
    6. At today's exchange rate, that's $4.85/month.
  3. So if we cryptomine on our wiki users, we're looking at between $0.10 - $5 per month profit. Meh.
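For reference, the optimistic end of that estimate as a bc one-liner (all inputs from the figures above):
# 2422 visitors/day * 208s * 30 hashes/s * 30 days, at 0.000064 XMR per 1M hashes
echo "scale=6; 2422 * 208 * 30 * 30 / 1000000 * 0.000064" | bc
# => .029017 XMR/month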
  4. fixed mod_security false-positive
    1. whitelisted 981247 = SQLI
  5. I created a snapshot of the wiki for Christian to build a local copy for zim-ifying it for kiwix
# DECLARE VARS
snapshotDestDir='/var/tmp/snapshotOfWikiForChris.20180526'
wikiDbName='osewiki_db'
wikiDbUser='osewiki_user'
wikiDbPass='CHANGEME'

stamp=`date +%Y%m%d_%T`

mkdir -p "${snapshotDestDir}"
pushd "${snapshotDestDir}"
time nice mysqldump --single-transaction -u"${wikiDbUser}" -p"${wikiDbPass}" --databases "${wikiDbName}" | gzip -c > "${wikiDbName}.${stamp}.sql.gz"
time nice tar -czvf "${snapshotDestDir}/wiki.opensourceecology.org.vhost.${stamp}.tar.gz" /var/www/html/wiki.opensourceecology.org/*
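Before handing these off, a quick sanity check of the resulting archives:
ls -lh "${snapshotDestDir}"
sha256sum "${snapshotDestDir}"/*.gz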
  1. I drove to town to pick up some plexiglass and epoxy for the 3d printer
  2. I worked on Marcin's computer for a bit, updated the OSE Marlin github to include compartmentalized Configuration.h files for distinct D3D flavors, and added a new one with the LCD. I got the LCD connected & working, then the SD card connected & working
  3. When we went to print from the SD card, the print nozzle was too high. We went back to Marlin to print, and the same thing happened.
  4. We need to fix the z probe (replace it?) before we can proceed further...

Fri May 25, 2018

  1. Waking up the day after the migration and, hooray, nothing is broken!
  2. I got a few emails from people saying thank you, a few asking me to delete their accounts from the wiki, and one from someone asking for help resetting their password.
  3. I confirmed that the self-password-reset function worked for me (it sent me an email with a temporary password, I used it, logged-in, then reset my password)
  4. I hopped over to awstats & munin. Finally, awstats is getting some good data.

Awstats morningAfterWikiMigration.png AwstatsPages morningAfterWikiMigration.png

  1. Munin looks good too

Munin morningAfterWikiMigration1.gif Munin morningAfterWikiMigration2.gif

    1. obviously there's a jump in the graphs from after the wiki (our most popular site) began piping more traffic to hetzner2. Most notable is the number of connections
    2. CPU & load didn't change much. But if we didn't have varnish, that certainly wouldn't be the case.
    3. there's some minor changes to the varnish hit rate graph. A tiny sliver of orange hit-for-pass appeared at the top of the graph--that's probably our wiki users that are logged-in. But the hit rate still seems >90%, which is awesome! The dip in the hit rate was from me manually giving varnish a restart--surprisingly it didn't dip much, and it quickly returned to >90% hit rate!
    4. the number of objects in the varnish cache nearly doubled. I would have expected it to more than double, but maybe it will with time.
  1. I emailed screencaps of awstats & munin to Marcin & CCd Catarina
  2. I confirmed that the backup on hetzner1 finished, and that it included the wiki
osemain@dedi978:~$ bash -c 'source /usr/home/osemain/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh hetzner1/*'
12G     hetzner1/20180501-052002
259M    hetzner1/20180502-052001
0       hetzner1/20180520-052001
12G     hetzner1/20180521-052001
12G     hetzner1/20180522-052001
12G     hetzner1/20180523-052001
12G     hetzner1/20180524-052001
481M    hetzner1/20180524-141649
12G     hetzner1/20180524-233533
12G     hetzner1/20180525-052001
  1. I moved the now-static wiki files from the old server into the noBackup dir, so that we won't have those being copied to dreamhost nightly anymore (the backup size should drop from 12G to 0.5G).
osemain@dedi978:~$ mkdir noBackup/deleteMeIn2019/osewiki_olddocroot
osemain@dedi978:~$ mv public_html/w noBackup/deleteMeIn2019/osewiki_olddocroot/
osemain@dedi978:~$ 
  1. the backups on hetzner2 aren't growing like crazy, so that bug appears to have been fixed
osemain@dedi978:~$ bash -c 'source /usr/home/osemain/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh hetzner2/*'
15G     hetzner2/20180501-072001
43G     hetzner2/20180524_072001
15G     hetzner2/20180525_001249
15G     hetzner2/20180525_072001
osemain@dedi978:~$ 
  1. total disk usage on dreamhost is 172G; hopefully that won't trigger their action on us again. Now that the wiki is migrated, many other things that were blocked can begin to move--one of which is that we can move our backups onto s3, which will now be ~15G daily it seems.
osemain@dedi978:~$ bash -c 'source /usr/home/osemain/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh .'
172G    .
osemain@dedi978:~$ 
  1. we still didn't get a green lock in firefox for ssl because some content was still being loaded over plain http. I changed all the links on our Main_Page at least to be https where possible, but there was another culprit: our Creative Commons image in the bottom-right of every page. I changed this in LocalSettings.php from 'http://mirrors.creativecommons.org/presskit/buttons/88x31/png/by-sa.png' to 'https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by-sa.png'. After clearing the varnish cache with `varnishadm 'ban req.url ~ "."'`, we got the green lock in firefox on our main page!
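A handy way to hunt for any remaining hard-coded http:// resources on a page, bypassing the browser:
curl -s https://wiki.opensourceecology.org/wiki/Main_Page | grep -o 'src="http://[^"]*"' | sort -u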
  2. I sent Ahmed (Shahat.log) a new password that I manually set for him
  3. for the 2x users who asked for their accounts to be deleted: unfortunately, deleting mediawiki users isn't supported https://meta.wikimedia.org/wiki/MediaWiki_FAQ#How_do_I_delete_a_user_from_my_list_of_users.3F
    1. instead, I offered to rename their account and/or change their email address
  4. I documented how to "block" users by changing their email address, renaming their account, and banning their account
  5. now that the wiki stuff is stable, I finally got a chance to merge-in the images I found on the 'oseblog'
    1. Emailed Marcin about this & discussed about using relative links
  6. documented TODO for OSE Linux needing encrypted persistence https://wiki.opensourceecology.org/wiki/OSE_Linux#Persistence
    1. emailed Christian asking him to look into adding encrypted persistence into OSE Linux, providing links for how TAILS does it
  7. Marcin noticed that we had anonymous edits permitted. I looked in the LocalSettings.php file & found this
$wgGroupPermissions['*']['edit'] = true;
    1. I commented that out, replacing it with the line
$wgGroupPermissions['*']['edit'] = false;
#$wgGroupPermissions['*']['edit'] = true;
    1. I'm not sure why that line wasn't resulting in anon edits before, but it's fixed now!
  1. I did some further cleanup of the "wiki locking" section of LocalSettings.php. This included disabling the line that permitted '*' users to create accounts; it was misleading.
    1. before
################
# WIKI LOCKING #
################

# uncomment this to put a banner message on every page leading up to a
# maintenance window
#$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki will be temporarily made READ-ONLY during a maintenance window today at 15:00 UTC. Please finish & save any pending changes to avoid data loss.</div>';
#$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki is currently undergoing maintenance and is therefore in an ephemeral state. Any changes made to this wiki will be lost! Please check back tomorrow.</div>';

# temp wiki locks
$wgGroupPermissions['*']['edit'] = false;
#$wgGroupPermissions['*']['edit'] = true;
$wgGroupPermissions['*']['createaccount'] = true;
#$wgGroupPermissions['*']['createaccount'] = false; ## Account creation disabled by TLG 06/18/2015
$WgGroupPermissions['user']['edit'] = true;
$wgGroupPermissions['sysop']['createaccount'] = true;
#$wgGroupPermissions['sysop']['createaccount'] = false; ## Account creation disabled by TLG 06/18/2015
$wgGroupPermissions['sysop']['edit'] = true;

$wgReadOnlyFile="$IP/read-only-message";
    1. after
################
# WIKI LOCKING #
################

# uncomment this to put a banner message on every page leading up to a
# maintenance window
#$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki will be temporarily made READ-ONLY during a maintenance window today at 15:00 UTC. Please finish & save any pending changes to avoid data loss.</div>';
#$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki is currently undergoing maintenance and is therefore in an ephemeral state. Any changes made to this wiki will be lost! Please check back tomorrow.</div>';

# only registered users can edit
$wgGroupPermissions['*']['edit'] = false;
$WgGroupPermissions['user']['edit'] = true;
$wgGroupPermissions['sysop']['edit'] = true;

# only sysop users can create accounts
$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['sysop']['createaccount'] = true;

$wgReadOnlyFile="$IP/read-only-message";
  1. Marcin asked me to add 'zip' to the list of possible uploads to the wiki. Done.


  1. I returned to work on the jellybox
    1. started at 14:20. Paused at 15:00. minus 30 min for helping Marcin with git
    2. somehow I skipped the part where the back is installed. Maybe it was absent from the guide or I was absent-minded. Anyway, after I installed the so-called "last" acrylic piece on the top, I noticed the back still wasn't attached, so I just snapped it in at that point
    3. somehow I made it to the end without installing the proximity sensor. I went back to this doc & did it https://docs.imade3d.com/Guide/Assemble+the+X+Assembly/147
    4. the info on the checklist after the build was *extremely* lacking, so I just went off guide & started poking it at this point https://docs.imade3d.com/Wiki/Easy_Kit_Flow#Section_Checkpoint_It_s_Alive
      1. I stuck in the sd card, navigated to some thing, and told it to print
      2. I threw the filament spool where it looked like it went, clipped off the end, stuck it in where it looked like it belonged, and clipped it shut
    5. It heated up, moved around, and then immediately ran horizontally with the extruder tip slamming into the bed. I quickly turned off the big friendly button (power switch)!
    6. the guide was wrong for setting the "x homing offset". It should be "Settings -> Motion -> X home offset", but it said simply "Settings > X Homing Offset". I found it after poking around the menu for a bit https://docs.imade3d.com/Guide/%E2%86%B3+X+Homing+Offset/213
      1. there was no y homing offset option
    7. I finished my first print at 18:19. That's a 10.5 hour build time.
    8. I did some polishing of the first layer z offset to prevent it from looking like spaghetti on that first layer
    9. I thought my next print (the fish skeleton) was good, but Marcin pointed out that the fish's joints should actually be free to move. We broke the joints a bit & got motion, but apparently the print should be precise enough that we shouldn't have to break it
    10. I tightened the z motors; they were still loose from my initial build, where the video said to leave them loose. Subsequent prints were better. I got the gears to spin, but--again--only after some breaking
    11. now prints are sliding around on the surface before the print is finished. I'll have to fix that tomorrow

Thr May 24, 2018

  1. today is the big day! I began the wiki migration at ~14:00 UTC
  2. first step: I moved the wiki to a new home outside the docroot, effectively turning the existing wiki off (and making the state of it immutable)
osemain@dedi978:~$ date
Thu May 24 16:07:07 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ mkdir ~/wiki_olddocroot
osemain@dedi978:~$ mv public_html/w ~/wiki_olddocroot
osemain@dedi978:~$ 
  1. In place of the old site, I created a simple html file indicating that we're down for maintenance.
osemain@dedi978:~$ mkdir public_html/w
osemain@dedi978:~$ echo 'This wiki is currently down for maintenance. Please check back tomorrow.' > public_html/w/index.html
osemain@dedi978:~$ 
  1. now that the wiki can't change mid-backup, I initiated the backup process. This should be our last backup of the site pre-upgrade. Note that, after this completes, I'll need to move the '~/wiki_olddocroot' dir into '~/noBackup/deleteMeIn2019/wiki_olddocroot', else our 18G immutable copy of the old wiki will still be backed-up daily, superfluously
osemain@dedi978:~$ date
Thu May 24 16:16:45 CEST 2018
osemain@dedi978:~$ # STEP 0: CREATE BACKUPS
osemain@dedi978:~$ source /usr/home/osemain/backups/backup.settings
osemain@dedi978:~$ /usr/home/osemain/backups/backup.sh
================================================================================
Beginning Backup Run on 20180524-141649
...
  1. I know from experience that the hetzner1 backups take an absurdly long time to complete, but hetzner2's is only a couple hours. I probably won't wait for both to finish. I will kick off the one on hetzner2 and the migration of the data to hetzner2 at the same time. Then, after the hetzner2 backup finishes, I'll tear down the ephemeral wiki & replace it with the migrated data.
  2. First, I start the backups on hetzner2
root@hetzner2 ~]# # STEP 0: CREATE BACKUPS
[root@hetzner2 ~]# # for good measure, trigger a backup of the entire system's database & files:
[root@hetzner2 ~]# time /bin/nice /root/backups/backup.sh &>> /var/log/backups/backup.log
  1. Then I kicked-off the file/data transfer from hetzner1 to hetzner2. Note that I had to modify some of these vars because I moved the files outside the docroot, as that was the only way to stop writes to the db + files (hetzner1 is a shared hosting server, so I can't mess with the vhost configs or bounce the httpd service, etc -- one of the reasons we're migrating to hetzner2!)
# DECLARE VARIABLES
source /usr/home/osemain/backups/backup.settings
stamp=`date +%Y%m%d`
backupDir_hetzner1="/usr/home/osemain/noBackup/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}"
backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2"
backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz"
#vhostDir_hetzner1='/usr/www/users/osemain/w'
vhostDir_hetzner1='/usr/home/osemain/wiki_olddocroot/w'
dbName_hetzner1='osewiki'
 dbUser_hetzner1="${mysqlUser_wiki}"
 dbPass_hetzner1="${mysqlPass_wiki}"

# STEP 1: BACKUP DB
mkdir -p ${backupDir_hetzner1}/{current,old}
pushd ${backupDir_hetzner1}/current/
mv ${backupDir_hetzner1}/current/* ${backupDir_hetzner1}/old/
time nice mysqldump -u"${dbUser_hetzner1}" -p"${dbPass_hetzner1}" --all-databases --single-transaction | bzip2 -c > ${backupDir_hetzner1}/current/${backupFileName_db_hetzner1}

# STEP 2: BACKUP FILES
time nice tar -czvf ${backupDir_hetzner1}/current/${backupFileName_files_hetzner1} ${vhostDir_hetzner1}
  1. while those 3 backups ran, I logged into our cloudflare account
    1. Unrelated, but after I logged into our cloudflare account, I reset the password to a new 100-char random password. I stored it in our shared keepass.
    2. in the dns tab, I disabled CDN/DDOS protection for the 'opensourceecology.org' domain. That's the last one that had it enabled. So, after this migration, we'll be able to either cancel our cloudflare account (just go with the free version) or migrate our DNS back to dreamhost. iirc, cloudflare provides some free services, and they're actually a pretty good low-latency dns provider. So maybe we keep them as a free nameserver. Either way, let's stop paying them!
    3. The new site is going to be 'wiki.opensourceecology.org', which is already pointed at hetzner2. Only after we've finished validating this new wiki will I make 'opensourceecology.org' point to hetzner2. Then I'll probably just setup some 301 redirects with modrewrite to point all 'opensourceecology.org/w/*' requests to 'wiki.opensourceecology.org/'.
  2. I added a message to the top of our 'wiki.opensourceecology.org' site to warn users that they shouldn't make changes until after the migration is complete, lest they lose their work!
$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki is currently undergoing maintenance and is therefore in an ephemeral state. Any changes made to this wiki will be lost! Please check back tomorrow.</div>';    
  1. the backup run on hetzner2 finished creating the encrypted archive & the hetzner1 backup of the wiki files/data finished at about the same time. The backup process on hetzner2 was still rsyncing to dreamhost, but that's fine. I'm going to proceed with bringing up the new wiki on hetzner2.
  2. Marcin tried to log in, but it failed (because his password was too short)

Incorrect username or password entered. Please try again.

  1. the only outstanding issue with the wiki is that Marcin isn't getting an email when someone requests an account on the wiki. Note that we did confirm that we can approve requests from the wiki, and then the user does get an email with their temp password after the account is created. It's just that the initial email notifying Marcin that someone submitted a request to create an account isn't coming through.
  2. we decided that this isn't a blocker; we're going to mark the migration as a success, and I'll look into the outgoing notification emails at a later date
  3. next, I need to setup redirects, email our most active users that they should reset their passwords, and mark all users' passwords as expired
    1. I tested that I could force my test user to have their password reset upon next login. note that this field is normally 'NULL' for our users
UPDATE wiki_user SET user_password_expires = '19990101000000' WHERE user_name = 'Maltfield2Test';
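And a quick query to confirm the expiry stuck for the test user (using the usual db superuser credentials):
mysql -u"${dbSuperUser_hetzner2}" -p"${dbSuperPass_hetzner2}" osewiki_db -e "SELECT user_name, user_password_expires FROM wiki_user WHERE user_name = 'Maltfield2Test';"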
  1. Marcin found an issue with modsecurity blocking an ini file, which is apparently an "initialization" file type related to 3d printing. It looks like a config file, so mod_security blocked it
    1. fixed by whitelisting rule id 960035
    2. I also enabled uploading of 'ini' files by adding it to the $wgFileExtensions array in LocalSettings.php
  2. I updated our 'opensourceecology.org' dns entry to point to our new server = '138.201.84.243'
  3. before I reset everyone's password, I'm going to kick off a backup. I confirmed that this morning's backup from before the migration finished before I kicked off the post-migration backup.
[root@hetzner2 ~]# bash -c 'source /root/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh hetzner2/*'
15G     hetzner2/20180501-072001
51G     hetzner2/20180520_072001
53G     hetzner2/20180521_072001
50G     hetzner2/20180522_072001
56G     hetzner2/20180523_072001
43G     hetzner2/20180524_072001
57G     hetzner2/20180524_142031
[root@hetzner2 ~]# 
  1. I also confirmed that our last good backup of the site on hetzner1 before migration was finished too, but it's too small--it must have omitted our wiki!
  1. I decided to run the hetzner1 backup again, but I put the docroot back in-place. Nothing is pointed at hetzner1, so we shouldn't actually have any split-brain issues. But to be safe, I created the read-only file to lock the wiki.
osemain@dedi978:~$ date
Fri May 25 01:32:32 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ ls public_html/w
AdminSettings.sample  favicon.ico                  INSTALL                   oft                    read-only-message.20170814.bak  serialized            tomg.php
api.php               favicon.png                  install-utils.inc         old.txt                read-only-message.20170819.bak  site-config           tom_merge.php
api.php5              files                        jsduck.json               oldusers.html          read-only-message.20170826      sitemap.xml           trackback.php
bin                   googledf1b3694f1a60c17.html  languages                 oldusers.txt           read-only-message.20180301      sitemap.xml.gz        trackback.php5
cache                 Gruntfile.js                 load.php                  opensearch_desc.php    redirect.php                    skins                 Update.php.html
composer.json         HISTORY                      load.php5                 opensearch_desc.php5   redirect.php5                   StartProfiler.php     UPGRADE
config                htaccess-site.txt            LocalSettings.php         ose-logo.png           redirect.phtml                  StartProfiler.sample  wiki.phtml
COPYING               images                       LocalSettings.php.update  OSE_Proposal_2008.pdf  RELEASE-NOTES                   support.html          wp-content
CREDITS               img_auth.php                 log.tmp                   php5.php5              RELEASE-NOTES-1.20              test.php
docs                  img_auth.php5                maintenance               profileinfo.php        RELEASE-NOTES-1.23              tests
error_log             includes                     Makefile                  profileinfo.php5       RELEASE-NOTES-1.23.1            thumb_handler.php
export                index.php                    math                      README                 RELEASE-NOTES-1.24              thumb_handler.php5
extensions            index.php5                   mathjax                   README.mediawiki       resources                       thumb.php
FAQ                   index-site.php               mw-config                 read-only-message      robots-site.txt                 thumb.php5
osemain@dedi978:~$ cat public_html/w/read-only-message
this site is offline. Please see our new site at https://wiki.opensourceecology.org
osemain@dedi978:~$ 
  1. I updated our let's encrypt certificate to include the naked domain 'opensourceecology.org'
certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/www.opensourceecology.org/htdocs/ -d opensourceecology.org  -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org  -w /var/www/html/staging.opensourceecology.org/htdocs -d staging.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d munin.opensourceecology.org
/bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
nginx -t && service nginx reload
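To sanity-check that the expanded cert actually covers the naked domain, something like the following works (a sketch; not part of the original session):

# list certbot's view of the cert
certbot certificates --cert-name opensourceecology.org
# or inspect the SAN list directly
openssl x509 -in /etc/letsencrypt/live/opensourceecology.org/cert.pem -noout -text | grep DNS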
  1. then I enabled redirects from our naked domain (which now should point to the blog at 'www') back to the wiki when someone uses the old-style domain. It matches anything starting with '/wiki/'. Note that I had to wrap the final return in a location block as well; a server-level return is processed during the server rewrite phase, before any location matching, so it would otherwise fire unconditionally and always redirect to 'www'.
[root@hetzner2 conf.d]# date
Thu May 24 23:49:21 UTC 2018
[root@hetzner2 conf.d]# pwd
/etc/nginx/conf.d
[root@hetzner2 conf.d]# head -n 50 www.opensourceecology.org.conf 
################################################################################
# File:    www.opensourceecology.org.conf
# Version: 0.2
# Purpose: Internet-listening web server for truncating https, basic DOS
#          protection, and passing to varnish cache (varnish then passes to
#          apache)
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2018-01-03
# Updated: 2018-05-24
################################################################################

server {
		# redirect the naked domain to 'www'
		#log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
   #                   '$status $body_bytes_sent "$http_referer" '
   #                   '"$http_user_agent" "$http_x_forwarded_for"';
		#access_log /var/log/nginx/www.opensourceecology.org/access.log main;
		#error_log /var/log/nginx/www.opensourceecology.org/error.log main;
   include conf.d/secure.include;
   include conf.d/ssl.opensourceecology.org.include;
   listen 138.201.84.243:443;
		server_name opensourceecology.org;

		####################
		# REDIRECT TO WIKI #
		####################

		# this is for backwards-compatibility; before 2018-05-24, both the wiki and
		# this site shared the same domain-name. So, just in case someone sends
		# opensourceecology.org/wiki/ a query trying to find the wiki, let's send them
		# to the right place..

		location ~* '^/wiki/' {

				return 301 https://wiki.opensourceecology.org$uri;

		}


		# this must be wrapped in a dumb location block or else the above block
		# does not work *shrug*

		location / {
				return 301 https://www.opensourceecology.org$uri;
		}

}

server {

[root@hetzner2 conf.d]# 
  1. I encountered some issues with the backup process filling up the disk. I believe the issue started when I changed the script to encrypt our backups: I had a logic error that prevented our backups from excluding the previous day's backups, so it grew fast! Our dreamhost was already approaching 500G again, so I quickly deleted all but the most important recent backups from hetzner2
    1. before
hancock% du -sh hetzner1
72G	hetzner1
hancock% du -sh hetzner2
321G	hetzner2
    1. after
hancock% du -sh hetzner1
72G	hetzner1
hancock% du -sh hetzner2
108G	hetzner2
hancock% 
  1. The hetzner2 backup completed.
  2. set the password-expiration timestamp of everyone's accounts to a date in the past, effectively forcing them to change their passwords on their next login attempt
[root@hetzner2 wiki_20180524]# mysql -u "${dbSuperUser_hetzner2}" -p"${dbSuperPass_hetzner2}" osewiki_db
...
MariaDB [osewiki_db]> UPDATE wiki_user SET user_password_expires = '19990101000000';
Query OK, 1913 rows affected (0.03 sec)
Rows matched: 1914  Changed: 1913  Warnings: 0

MariaDB [osewiki_db]> Bye
[root@hetzner2 wiki_20180524]#
  1. I then sent an email to every user with >5 edits (~500 people)

Hello,

For security reasons, it's imperative that you change your password on the Open Source Ecology wiki. If you re-used your wiki password on any other websites, you should change those accounts' passwords as well.

We just completed a major update to the OSE wiki. For more information on these changes, please see:

* https://wiki.opensourceecology.org/wiki/CHG-2018-05-22

As you can see from the above link, our wiki is located at a new location. And note that it's now using https, so your connection to our wiki is encrypted (before we were using http).

Any communications sent over http (as opposed to https) can be trivially intercepted by a third party. This includes passwords. Therefore, you should assume that any passwords you used at any OSE website before now have been compromised, and those passwords should be retired & replaced wherever you used them (on OSE sites or elsewhere).

In addition to enabling https, we've made many updates to the wiki. If you have any technical issues with our wiki, please don't hesitate to contact me.


Thank you,

Michael Altfield Senior System Administrator PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7 70D2 AA3E DF71 60E2 D97B

Open Source Ecology www.opensourceecology.org
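
For reference, a recipient list like that can be pulled straight from the user table; a sketch, assuming the standard user_editcount & user_email columns (the exact query used isn't recorded here):

-- hypothetical query: users with >5 edits and a known email address
SELECT user_name, user_email
FROM wiki_user
WHERE user_editcount > 5
  AND user_email != '';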

  1. while Marcin worked on wiki validation, I returned to work on the jellybox 3d printer build
    1. I started ~12:30. Stopped at 16:00
    2. the orientation of the idler bearings was not specified in the video. Probably someone with more mechanical experience wouldn't fuck this up, but I did. I realized my mistake when putting together the second idler. The correct orientation was still not specified, but the video happened to show it properly.
    3. note that the jellybox doesn't include a multimeter
    4. I did find HD photos on their site that do a great job of providing pictorial detail. Now that I think about it, they should probably just splice these into the video, either PIP style in the corner or as 1-second overlays. I did a lot of pause-work-play-pause-work action with the videos. And sometimes I had to switch tabs to the online documentation with the HD photos. If the video just paused on a nice HD, annotated shot, that would be very helpful.
    5. the install for the usb cable didn't tell me I needed a nut. Perhaps the usb cable end used to have this built-in, but I definitely needed to add a nut to mine. Also, the bolt wasn't long enough to add a nut, so I had to use an m3x16 instead of the m3x12. This will probably leave me short 2x m3x16s. Hopefully I have enough spares..

Tue May 23, 2018

  1. continued working on fixing the items that Marcin marked as "not working" following the formal wiki validation of the staging site pre-migration
    1. Marcin said the page "GVCS Tools Status" wouldn't save. I could not reproduce; perhaps they were fixed by the mod_security rules I whitelisted when fixing "Flashy XM" https://wiki.opensourceecology.org/wiki/GVCS_Tools_Status
    2. Marcin pointed out that the flash embed of a TED talk on the DPV page was missing https://wiki.opensourceecology.org/wiki/Dedicated_Project_Visits
      1. Flash is dead. And TED doesn't offer this content over https. I asked Marcin if this is the same TED talk that we show with an iframe to vimeo on our main page. If so, that's our best fix.
    3. Marcin has the same issue on the "Open_Source_Ecology" page. It's also a Flash TED embed, but there's also some strange cooliris.swf. Even the old link is down. I asked Marcin if it should be removed entirely https://wiki.opensourceecology.org/wiki/Open_Source_Ecology
    4. Marcin pointed out that iframes don't work...but they clearly did for the above vimeo embeds. I asked for clarification.
    5. Marcin mentioned an issue with the Miracle Orchard Workshop Book section w/ issuu. But it works fine for me. I'll ask for clarification. https://wiki.opensourceecology.org/wiki/Miracle_Orchard_Workshop#Book
    6. Marcin mentioned that there was an issue embedding eventzilla, but I don't see an example of this failing. I asked for clarification.
    7. Marcin pointed out issues with uploading many file formats, such as '.fcstd' freecad files. The test was to download this existing file from the wiki, then try to re-upload it. It failed with a mime error https://wiki.opensourceecology.org/wiki/File:Peg_8mm_rods.fcstd
File extension ".fcstd" does not match the detected MIME type of the file (application/zip).
      1. I read through this document, which describes how mediawiki detects & handles mime types of files https://www.mediawiki.org/wiki/Manual:MIME_type_detection
      2. for us, the relevant config files on hetzner1 are:
        1. /usr/home/osemain/public_html/w/includes/mime.types
        2. /usr/home/osemain/public_html/w/includes/mime.info
      3. and they changed paths for hetzner2 (probably just because it's a newer version of Mediawiki):
        1. /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/mime/mime.types
        2. /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/mime/mime.info
      4. I couldn't find any reference to fcstd files in either file on hetzner1. My best guess is that mime type checking wasn't as strict on that older version of mediawiki.
osemain@dedi978:~/public_html/w/includes$ grep 'fcstd' mime.*
osemain@dedi978:~/public_html/w/includes$ 
      1. I checked the mime type of the file in question on hetzner2, and--yes--I got 'application/zip'
[root@hetzner2 mime]# file --mime-type ../../../images/3/32/Peg_8mm_rods.fcstd 
../../../images/3/32/Peg_8mm_rods.fcstd: application/zip
      1. I added a line 'application/zip fcstd' to hetzner2:/var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/mime/mime.types
        1. Note that there's already a line for this mime type. I confirmed that adding this additional line complements (as opposed to replaces) the existing line by uploading an actual '.zip' file. Note that 'zip' isn't in our '$wgFileExtensions'--I had to temporarily add it.
      2. I then also had to add 'fcstd' to the $wgFileExtensions array in LocalSettings.php. Note that 'FCStd' was already there, but there's a case difference. That sucks that this file extension list is case-sensitive :( (see the sketch after this list)
      3. I made similar changes & confirmed that I could upload the following file formats as well
        1. stf
        2. rtf
        3. csv
        4. xml
        5. dxf
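Each of these format fixes amounts to the same two one-line changes shown for fcstd; a sketch (the LocalSettings line is written as an append for illustration--the actual edit may have been made inline in the existing array):

# /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/mime/mime.types
application/zip fcstd

# LocalSettings.php -- note this extension list is case-sensitive
$wgFileExtensions[] = 'fcstd';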
    1. Marcin & I met at noon to discuss this migration. It looks like most of the issues above are trivial fixes or already fixed. We did attempt to register & approve a new user. There seem to be some issues with email from mediawiki, but it could just be exceedingly slow. Not sure. In any case, we were able to request an account. Marcin approved the new request & created the account (there's an additional step, as the approver is redirected to the admin-only "Create Account" Special Page).
    2. We also accidentally validated that new account passwords must be >=10 characters & strong (see the sketch below). Marcin was driving, and his test user used a short password. Mediawiki refused it with a nice error saying passwords must be >=10 characters. I told him to just use '1234567890' for the test, which was also rejected because it's a common password (nice!). On our third try, it went through--with a sufficiently long & non-common password.
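For reference, this behavior comes from MediaWiki's $wgPasswordPolicy; a sketch of the sort of policy in play (an assumption--the exact config isn't reproduced in this log):

# LocalSettings.php (sketch)
$wgPasswordPolicy['policies']['default']['MinimalPasswordLength'] = 10;
$wgPasswordPolicy['policies']['default']['PasswordNotInLargeBlacklist'] = true;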
  1. Marcin & I met again. He said that everything is working except xml files. His example is being blocked by the wiki's built-in script blocker. Upon consideration, Marcin said xml files aren't a requirement; we can just nowiki them.
  2. We had some intermittent issues with email coming from the server to the new user. Our solution was just to set $wgEmailConfirmToEdit to 'false' (see the snippet below). So now someone can edit pages without having to click the confirmation link emailed from our server. The risk here is that someone can use a bogus email, but they still have to be approved by Marcin. Not a big issue.
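For the record, that's a one-liner in LocalSettings.php:

# LocalSettings.php
$wgEmailConfirmToEdit = false; # allow edits without a confirmed email address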
  3. We decided to push back the migration to tomorrow, starting ~9am CT.
  4. Marcin & I went to add spare steel roofing on top of the old workshop's carpet sandwich roof
  5. Marcin & I spent some time repairing the micro track
  6. Based on an earlier conversation with Marcin about my youthful experiments building ball mills to grind aluminum down to a fine powder + the utility that has for 3d printing via laser sintering, I did some research on this & added some comments to the Open Source Ball Mill page

Mon May 22, 2018

  1. created a page for the imade3d jellybox 3d printer on the wiki Imade3d
  2. created a page for documenting my experience building the Jellybox 1.3 3d printer Jellybox_1.3_Build_2018-05
  3. I emailed Meetecho asking for a quote for a webrtc SaaS solution for a 2-day workshop

Hello,

We're looking for a solution that will allow us to have ~ 1-12 remote speakers present a workshop to ~10-100 remote viewers.

Traditional P2P WebRTC doesn't scale for our needs, so we were looking at a SFU. In order to run a Janus gateway ourselves, we'd need to purchase an additional server. But we only do workshops like this a few times per year, so we can't justify the purchase of a 24/7 server.

Could you please provide us with a quote for how much it would cost for Meetecho to provide us with a platform (hosted by Meetecho) where we could have several remote speakers presenting to about a hundred participants? It would be essential that the participants can ask real-time questions by typing. It would be a huge plus if they could digitally "raise their hand" and we could temporarily escalate their participant status to become a "publisher" so they can ask a question using RTC mic + webcam (visible to all), and then (after the question has been asked), we could demote them from publisher back to subscriber.

Do you offer something like this? How much would it cost us for a 2-day workshop?

Thank you, Michael Altfield

  1. The wiki migration was scheduled for today, but we haven't finished validation. To be safe, we're pushing it back to tomorrow. I updated the change page CHG-2018-05-22 and the banner message at the top of the site
  2. I spent another 30 minutes building the 3d printer bed
    1. the vice grip supplied is super cheap & doesn't clamp evenly
    2. I finished the Y-assembly, so next step is "10_Quadruple"
  3. Catarina's internet is high-latency & slow. It's provided by satellite for $70/month. It has a max bandwidth/month cap that she hits ~1/2 way through the month. I spoke with Catarina about linking Marcin's Microhouse with the Seedhouse.
    1. Marcin's microhouse has lower latency & generally faster speeds. It works better for a VPN than the satellite connection. And it's cheaper. But their provider (and it's a monopoly, of course) is unreliable--losing internet for days is not uncommon. The satellite connection, ironically, has higher availability.
    2. The distance between the seedhome and the microhome is 1000 ft. The max recommended run for cat5 is 100 meters ≈ 328 ft, so a single cable run would be over 3x the spec.
    3. The satellite plan at the seedhome is also locked-in for 2 years.
    4. c'est la vie; it is what it is. Long-term solution: open source satellites. Slightly less long-term: trench a conduit line between the structures and drop two fiber-optic lines through it.
  4. I fixed a modsecurity issue triggered by adding the following block to https://www.opensourceecology.org/wp-admin/post.php?post=462&action=edit
<em><a href="https://www.opensourceecology.org/wp-content/uploads/2009/01/1-basic-building.jpg"><img src="https://www.opensourceecology.org/wp-content/uploads/2009/01/1-basic-building.jpg" /></a></em>
    1. 973324 IE XSS
    2. 973317 IE XSS
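The fix itself is the usual whitelist addition to the vhost config; roughly (see the consolidated lists under Fri May 18 below for the full ID sets):

# 000-www.opensourceecology.org.conf (sketch; '...' = the existing IDs)
<LocationMatch "/(wp-admin|ose-hidden-login)/">
   SecRuleRemoveById ... 973317 973324
</LocationMatch>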
  1. changed Marcin's password again for the ephemeral wiki so it's >20 characters. I sent it to him via Wire. Now he can proceed with the validation.
  2. I began searching on hetzner1 to see if we had any copy of our old wp-content/upload files pre-dating 2012. Unfortunately, the live site I migrated from also only has uploaded files going back to 2012
osemain@dedi978:~/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads$ date
Tue May 22 23:23:32 CEST 2018
osemain@dedi978:~/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads$ pwd
/usr/home/osemain/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads
osemain@dedi978:~/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads$ ls -lah
total 36K
drwxr-xr-x  9 osemain users   4.0K Jan  1 01:11 .
drwxr-xr-x  8 osemain osemain 4.0K Mar  2 03:59 ..
drwxr-xr-x  6 osemain users   4.0K Feb 14  2014 2012
drwxr-xr-x 10 osemain users   4.0K Feb 17  2014 2013
drwxr-xr-x 13 osemain users   4.0K Dec  1  2014 2014
drwxr-xr-x 14 osemain users   4.0K Dec  1  2015 2015
drwxr-xr-x 14 osemain users   4.0K Dec  1  2016 2016
drwxr-xr-x 14 osemain users   4.0K Dec  1 01:05 2017
drwxr-xr-x  5 osemain users   4.0K Mar  1 01:03 2018
osemain@dedi978:~/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads$ 
  1. I found a backup that was apparently made in 2017 by someone ~6 months before I became an OSE dev. I confirmed that it also only has uploads starting in 2012.
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ date
Tue May 22 23:41:16 CEST 2018
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ pwd
/usr/home/osemain/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ ls
wp-activate.php             wp-blog-header.php    wp-config-sample.php  wp-includes        wp-login.php     wp-signup.php
wp-admin                    wp-comments-post.php  wp-content            wp-links-opml.php  wp-mail.php      wp-trackback.php
wp-backup_1-17-2017.tar.gz  wp-config.php         wp-cron.php           wp-load.php        wp-settings.php  wp-xmlrpc.php
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ ls -lah wp-content/uploads/
total 32K
drwxr-xr-x  8 osemain osemain 4.0K Jan  1  2017 .
drwxr-xr-x  8 osemain osemain 4.0K Jan 18  2017 ..
drwxr-xr-x  6 osemain osemain 4.0K Feb 14  2014 2012
drwxr-xr-x 10 osemain osemain 4.0K Feb 17  2014 2013
drwxr-xr-x 13 osemain osemain 4.0K Dec  1  2014 2014
drwxr-xr-x 14 osemain osemain 4.0K Dec  1  2015 2015
drwxr-xr-x 14 osemain osemain 4.0K Dec  1  2016 2016
drwxr-xr-x  3 osemain osemain 4.0K Jan  1  2017 2017
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ 
  1. I also checked the old hetzner1 cron jobs, but there was only one to create an xml backup of the wiki. Nothing existed for wordpress
  2. I emailed Elifarley asking if he may know of some old backups we may have of the wordpress site from before 2011
  3. I did some more digging, and I did find the images under a distinct user on the same machine at hetzner1:/usr/www/users/oseblog/wp-content/uploads/
oseblog@dedi978:~/.ssh$ date
Wed May 23 07:53:45 CEST 2018
oseblog@dedi978:~/.ssh$ ls -lah /usr/www/users/oseblog/wp-content/uploads/
total 112K
drwxrwxrwx 14 oseblog oseblog 4.0K Jan  1  2014 .
drwxr-xr-x 20 oseblog oseblog 4.0K Nov 30  2014 ..
drwxrwxrwx  5 oseblog oseblog 4.0K Jan 18  2010 2007
drwxrwxrwx 14 oseblog oseblog 4.0K Jan 18  2010 2008
drwxrwxrwx 14 oseblog oseblog 4.0K Jan 18  2010 2009
drwxrwxrwx 14 oseblog oseblog 4.0K Dec 18  2010 2010
drwxrwxrwx 14 oseblog oseblog 4.0K Dec  3  2011 2011
drwxrwxrwx 14 oseblog users   4.0K Dec  1  2012 2012
drwxrwxrwx 14 oseblog users   4.0K Dec  1  2013 2013
drwxrwxrwx 14 oseblog users   4.0K Dec  1  2014 2014
drwxrwxrwx 11 oseblog oseblog 4.0K Sep 15  2010 avatars
drwxrwxrwx  3 oseblog oseblog 4.0K Aug 16  2010 group-avatars
-rw-r--r--  1 oseblog oseblog   94 Aug  8  2011 .htaccess
drwxrwxrwx  2 oseblog users    24K Nov  1  2011 img
--w-------  1 oseblog users    25K Jul 26  2011 lib.php
drwxrwxrwx  2 oseblog oseblog 4.0K Aug 21  2010 podpress_temp
-rw-r--r--  1 oseblog users    122 Sep 11  2012 system.php
oseblog@dedi978:~/.ssh$ 
  1. I did another 30 minutes on the jellybox
    1. The number of zipties differed between the video & my version of the jellybox. Not a big deal.
    2. I finished part 10, 11, & 12. Next up: 13: Wiring
  2. Operated the Auger for Marcin & Catarina to plant grapes & evergreen trees at Seedhome. The Auger broke just before the last couple holes. A plastic piece that stabilizes the handle fell off (the two bolts loosened & fell out; one was found, the other was not). And the electrical switch for turning it off stopped working. Marcin squeezed the gas line to make the engine stall.
  3. Made a big batch of hummus
  4. Marcin got back to me on the wiki validation, and he found a ton of issues
    1. One of the first issues was he couldn't edit the 'Flashy XM' wiki article due to a mod_security 403 Forbidden. I fixed it by whitelisting the following rules
    2. 973337 XSS
    3. 958018 XSS
    4. 958407 XSS
    5. 958039 XSS
    6. 973303 XSS
    7. 973315 XSS
    8. 973346 XSS
    9. 973321 XSS

Mon May 21, 2018

  1. published log, updated hours
  2. Marcin sent me an email about missing pictures from a post in 2008 on blog.opensourceecology.org, but we didn't have any uploads before 2011

[root@hetzner2 uploads]# pwd
/var/www/html/www.opensourceecology.org/htdocs/wp-content/uploads
[root@hetzner2 uploads]# ls
2012  2013  2014  2015  2016  2017  2018
[root@hetzner2 uploads]#

I did a search for that image across all the years, and nothing turned up.

[root@hetzner2 uploads]# find . | grep -i 'basic-building'
[root@hetzner2 uploads]#
    1. Marcin said we should check old backups for this image data
  1. other email follow-up
  2. attempted to build the IMade3D Jellybox. I have never used nor operated a 3d printer; I've only seen & read about them. Now I set out to build this machine by myself
    1. I spent 3.5 hours on this build today.
    2. the box didn't actually include instructions; it included a few pages that said "go here on our website to view the instructions." So I mostly ended up watching videos on youtube that break apart the steps
    3. the videos didn't tell me which box everything was in. The first task was to install thread lock on the set screws for the puller motors. Where is the thread lock? Does this kit even include it? I finally found it in the Tools box. Once you open each box, it has a checklist of the items inside that box. It would be great, however, if the instructions (either the printed ones or the ones found online) iterated over all the boxes, specified what each contains, and then, for each step, mentioned "you'll need X part, which can be found in the Y box"
    4. some of the items were actually already built, so I just dumbly watched a video describing how to build something that was already built. Okay..
    5. putting the "bird on the branch" was brilliant. Even though I didn't have to do this; it was already built!
    6. the kit didn't include a phillips head screwdriver, which is necessary for installing the fan. not a big deal, but worth noting
    7. the wires were already run in the box. That was nice.
    8. in general, having videos is great. If a picture says a thousand words, then a video says a thousand pictures (well, ~30 pictures per second). Being able to see the thing oriented rather than trying to figure out what is the right vs left side is critical.
      1. that said, the videos on youtube were just 360p. This should really, really be high-def. If I want to freeze-frame to see which small hole a bolt is going through, it's necessary to have high def so I get a detailed picture instead of a blur.
  3. I met with Marcin to set him up with an ext3 backup usb drive that contains a 20M veracrypt file container. This encrypted volume contains:
    1. a backup of his pgp key (I created a new pgp key for him yesterday + revoked his old key from earlier this year as he forgot the password)
    2. a backup of his ssh key (I created a new ssh key for him yesterday)
    3. a backup of his personal keepass file (We created a new keepass for him yesterday)
    4. a backup of our shared ose keepass, which lives on the ose server. He also has a backup of the keyfile used to decrypt our shared ose keepass on this drive as well.
    5. a redundant backup of our key file used to encrypt our server backups
  4. Marcin's laptop now has ssh access to hetzner2 just by typing `ssh www.opensourceecology.org`. There's a '$HOME/.ssh/config' file that sets the port correctly.
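A sketch of what that config file looks like (the actual port & username aren't recorded in this log; the values below are hypothetical):

# ~/.ssh/config (sketch)
Host www.opensourceecology.org
   # our sshd listens on a non-standard port; 2222 is a hypothetical placeholder
   Port 2222
   User marcin
   IdentityFile ~/.ssh/id_rsa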
  5. Marcin has been trained on how to access our shared ose keepass remotely using sshfs.

Sun May 20, 2018

  1. I arrived at FeF! I finally met Marcin & Catarina in-person and got a tour of the workshop, seedhomes, etc.
  2. My purpose for this visit is primarily driven by the need to ensure that Marcin has access to our live keepass (which necessitates having ssh access to our server) as well as a local copy of his ssh key, personal keepass, and our shared ose keepass. We may also migrate the wiki tomorrow, depending on the status of its validation

Fri May 18, 2018

  1. I consolidated all our modsecurity whitelists from all our wordpress sites into one long, numerically sorted list, then I added this to all our wordpress sites' vhost configs. This will prevent false-positive 403 issues that have been fixed on one wordpress site from cropping up on another. It's not ideal, but it's a pragmatic compromise.
[root@hetzner2 conf.d]# date
Fri May 18 14:29:10 UTC 2018
[root@hetzner2 conf.d]# pwd
/etc/httpd/conf.d
[root@hetzner2 conf.d]# grep -A 3 '<LocationMatch "/(wp-admin|ose-hidden-login)/">' * | grep 'SecRuleRemoveById' | tr " " "\n" | sort -un | grep -vi SecRuleRemoveById | tr "\n" " "
200003 200004 950001 950109 950120 950901 958008 958030 958051 958056 958057 959070 959072 959073 960015 960017 960020 960024 960335 960904 960915 970901 973300 973301 973304 973306 973316 973327 973329 973330 973331 973332 973333 973334 973335 973336 973337 973338 973344 973347 981172 981173 981231 981240 981242 981243 981244 981245 981246 981248 981253 981257 981317 981318 981319 [root@hetzner2 conf.d]# 
[root@hetzner2 conf.d]# 
    1. this was applied to the following vhost files
[root@hetzner2 conf.d]# grep -irl '<LocationMatch "/(wp-admin|ose-hidden-login)/">' *
000-www.opensourceecology.org.conf
00-fef.opensourceecology.org.conf
00-oswh.opensourceecology.org.conf
00-seedhome.openbuildinginstitute.org.conf
00-www.openbuildinginstitute.org.conf
staging.opensourceecology.org.conf
[root@hetzner2 conf.d]# 
    1. for the record, here is what the files had before the change
[root@hetzner2 conf.d]# grep -irA 3 '<LocationMatch "/(wp-admin|ose-hidden-login)/">' * | grep 'SecRuleRemoveById'
000-www.opensourceecology.org.conf-                     SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 958057 958056 973327 973337 950001 973336 958051 973331 973330 959070 958008 973329 960024
00-fef.opensourceecology.org.conf-                      SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 960915 200003
00-oswh.opensourceecology.org.conf-                     SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246
00-seedhome.openbuildinginstitute.org.conf-                        SecRuleRemoveById 960015 981173 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120
00-www.openbuildinginstitute.org.conf-                        SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 981318
staging.opensourceecology.org.conf-                     SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 958057 958056 973327 973337 950001 973336 958051 973331 973330 959070
[root@hetzner2 conf.d]# 
    1. I actually wrapped these new mod_security whitelist rules up into a new file at /etc/httpd/conf.d/mod_security.wordpress.include . This way, when we add a rule, we add it to all sites at once.
      1. I intentionally did not do this with the other common wordpress blocks, such as blocking of '.git' dirs, blocking 'wp-login.php', etc, as I don't want someone to comment-out the include in an attempt to debug a mod_security issue and suddenly disable those other critical security blocks, which--unlike mod_security--never produce false positives. Also, this mod_security whitelist actually needs to be updated over time, so the include file helps (see the sketch below); the other stuff is essentially static.
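A sketch of how the pieces plausibly fit together (the include file carries the shared SecRuleRemoveById list; each wordpress vhost pulls it in):

# /etc/httpd/conf.d/mod_security.wordpress.include (sketch; '...' = the full ID list above)
SecRuleRemoveById 200003 200004 950001 ... 981319

# inside each wordpress vhost's config (sketch)
<LocationMatch "/(wp-admin|ose-hidden-login)/">
   Include conf.d/mod_security.wordpress.include
</LocationMatch>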
  1. I also added a block that prevents php from executing any files placed into the uploads dir
   # don't execute any php files inside the uploads directory
   <LocationMatch "/wp-content/uploads/">
      php_flag engine off
   </LocationMatch>
   <LocationMatch "/wp-content/uploads/.*(?i)\.(cgi|shtml|php3?|phps|phtml)$">
      Order Deny,Allow
      Deny from All
   </LocationMatch>
  1. Marcin said we should migrate the wiki Tuesday pending validation.
    1. I spent some time formally documenting all the wiki changes here http://opensourceecology.org/wiki/CHG-2018-05-22
    2. I added a banner notice message to the prod wiki site with "$wgSiteNotice" to inform our users of the upcoming maintenance window, and I linked to the CHG above
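A sketch of the banner mechanism ($wgSiteNotice is the stock MediaWiki setting; the message text below is hypothetical):

# LocalSettings.php (sketch)
$wgSiteNotice = "Scheduled maintenance 2018-05-22: this wiki is moving. See [[CHG-2018-05-22]] for details.";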
  2. Marcin sent me another 403 forbidden false-positive. I whitelisted 950907 = "generic" / "system command injection" attack and asked him to try again
  3. attempted to update the "3d printer workshop" page and I immediately got some modsecurity false-positives, which I whitelisted
    1. 981256 sqli
    2. 981249 sqli
  4. Marcin sent me another string that was triggering modsec false-positives. The fix was to whitelist these rules:
    1. 958413 xss

Thu May 17, 2018

  1. Marcin forwarded me a security alert from Dreamhost that our server had been sending spam from the 'ose_marcin' account. Note that recently we got an alert from them about the 'ose_community' account on that server, which had been running drupal; I already changed that user's password & shut down the vhost. Now this is a distinct account! But trying to investigate this damn incident on a shared server without root is like trying to weld with a soldering iron. I sent them an email asking many questions & for more information about what happened.

Wed May 16, 2018

  1. My request to join the meetecho-janus google group yesterday was approved
  2. I posted a thread to the meetecho-janus google group asking for janus security best-practices https://groups.google.com/forum/#!topic/meetecho-janus/0Vx_Vl0hmwU
  3. I updated my git issue. Lorenzo updated their site, and I was able to confirm that the issue occurs there too. https://github.com/meetecho/janus-gateway/issues/1233
  4. I tried to research ICE hardening, but again the searches for webrtc security led to marketing guides talking about how secure it is for the client
  5. while I wait for a response from the janus community on my hardening thread, I began to research how we can administer the videoroom. Specifically, we need to be able to select which participants can become a publisher instead of just a subscriber. This is something OpenTok does well for clients like the MLB, but that shit ain't open.
  6. there are some janus configuration options regarding authentication with the api, but that appears to be all-or-nothing auth. There doesn't appear to be anything that would specifically allow a subscriber to escalate themselves to become a publisher. https://janus.conf.meetecho.com/docs/auth.html
  7. found this which says "you can control who can join, but you can not control his activities after join." https://groups.google.com/forum/#!searchin/meetecho-janus/videoroom$20publish$20authentication%7Csort:date/meetecho-janus/TJivBoiOXA0/KaqrfKx0AwAJ
  8. so we may have to write a modified version of the videoroom.
  9. or we can just password-protect the whole videoroom, then capture it somehow and rebroadcast it through another subscribe-only channel, similar to how youtube live works.
    1. I posted this question in all of its ignorance here; we'll see what happens.. https://janus.conf.meetecho.com/docs/auth.html

Tue May 15, 2018

  1. the main dev behind Janus is Lorenzo Miniero, and he's the one who responded to my git issue in <5 minutes. After reviewing his LinkedIn, I found an interview with him about the Open Source Janus Gateway here https://www.linkedin.com/pulse/meet-meetecho-janus-gateway-fabian-bernhard
  2. he also came from The University of Naples Federico II in Naples, Italy--which is where a lot of these WebRTC experts appear to have originated..
  3. I applied to write messages on the 'meetecho-janus' mailing list. After I'm approved, I'll ask the community if there are any guides on how to harden Janus' configuration or security best-practices. For example, file permissions and hardened configuration options for each of the config files (main, transport, plugins, etc)

Mon May 14, 2018

  1. continuing to debug why jangouts' text chat didn't work. The text room demo in janus also failed, and it said that it sent the data with "data-channels"
  2. I could not find out what the initial configure options were when I compiled janus (`janus --version` doesn't list them)
  3. I tried to reconfigure janus, this time explicitly setting '--enable-data-channels'. It failed with an error that libusrsctp was not found.
[root@ip-172-31-28-115 janus-gateway]# ./configure --enable-data-channels
...
checking for srtp_crypto_policy_set_aes_gcm_256_16_auth in -lsrtp2... yes
checking for usrsctp_finish in -lusrsctp... no
configure: error: libusrsctp not found. See README.md for installation instructions or use --disable-data-channels
[root@ip-172-31-28-115 janus-gateway]# 
  1. the main janus gateway git README explicitly lists usrsctp as a dependency, stating "(only needed if you are interested in Data Channels)". It links to the usrsctp github here https://github.com/sctplab/usrsctp
  2. the usrsctp github doesn't provide instructions for centos7. It states that it's tested for FreeBSD, Ubuntu, Windows, & Mac.
  3. I attempted to compile it manually, as internet searches suggested that it's not in any yum repo.
pushd /root/sandbox
git clone https://github.com/sctplab/usrsctp
pushd usrsctp
# generate the configure script, then build & install (defaults to /usr/local)
./bootstrap
./configure && make && sudo make install
popd
popd
  1. re-running ./configure now lists "DataChannels support: yes", which I confirmed was previously "no"
config.status: executing libtool commands

libsrtp version:           2.x
SSL/crypto library:        OpenSSL
DTLS set-timeout:          not available
DataChannels support:      yes
Recordings post-processor: no
TURN REST API client:      yes
Doxygen documentation:     no
Transports:
	REST (HTTP/HTTPS):     yes
	WebSockets:            no
	RabbitMQ:              no
	MQTT:                  no
	Unix Sockets:          yes
Plugins:
	Echo Test:             yes
	Streaming:             yes
	Video Call:            yes
	SIP Gateway (Sofia):   no
	SIP Gateway (libre):   no
	NoSIP (RTP Bridge):    yes
	Audio Bridge:          no
	Video Room:            yes
	Voice Mail:            no
	Record&Play:           yes
	Text Room:             yes
	Lua Interpreter:       yes
Event handlers:
	Sample event handler:  yes
	RabbitMQ event handler:no
JavaScript modules:        no

If this configuration is ok for you, do a 'make' to start building Janus. A 'make install' will install Janus and its plugins to the specified prefix. Finally, a 'make configs' will install some sample configuration files too (something you'll only want to do the first time, though).

[root@ip-172-31-28-115 janus-gateway]# 
  1. I recompiled & deployed with `make && make install`, then restarted janus. Unfortunately, I have the same issue. Note that the browser's answer sdp below still has port 0 on its 'm=application' line; in SDP, port 0 means that media stream (here, the data channel) is rejected.
Session: 8126727588102204
Handle: 3978092954853617
Processing POST data (application/json) (310 bytes)...
[transports/janus_http.c:janus_http_handler:1248]   -- Data we have now (310 bytes)
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/8126727588102204/3978092954853617...
[transports/janus_http.c:janus_http_handler:1223]  ... parsing request...
Session: 8126727588102204
Handle: 3978092954853617
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus":"message","body":{"request":"ack"},"transaction":"RvFOm1M7roLf","jsep":{"type":"answer","sdp":"v=0\r\no=- 6893769308065182494 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=msid-semantic: WMS\r\nm=application 0 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"}}
Forwarding request to the core (0x7f00ec000ca0)
Got a Janus API request from janus.transport.http (0x7f00ec000ca0)
Transport task pool, serving request
[3978092954853617] There's a message for JANUS TextRoom plugin
[3978092954853617] Remote SDP:
v=0
o=- 6893769308065182494 2 IN IP4 127.0.0.1
s=-
t=0 0
a=msid-semantic: WMS
m=application 0 DTLS/SCTP 5000
c=IN IP4 0.0.0.0
a=mid:data
a=sctpmap:5000 webrtc-datachannel 1024
[3978092954853617] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have NOT been negotiated
[WARN] [3978092954853617] Skipping disabled/unsupported media line...
[ERR] [janus.c:janus_process_incoming_request:1193] Error processing SDP
[RvFOm1M7roLf] Returning Janus API error 465 (Error processing SDP)
  1. I stumbled on yet another open source webrtc SFU based on node = Mediasoup https://mediasoup.org/about/
  2. I also found a formal description of SFUs in RFC7667 https://tools.ietf.org/html/rfc7667#section-3.7
  3. doh! It looks like my "./configure" today didn't include the "--prefix /opt/janus" that I used originally per the README in their github, so my test above was running the old binary https://github.com/meetecho/janus-gateway
[root@ip-172-31-28-115 janus-gateway]# LD_LIBRARY_PATH=/usr/lib && /opt/janus/bin/janus --version
Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd
Compiled on:  Fri May  4 00:11:11 UTC 2018

janus 0.4.0
[root@ip-172-31-28-115 janus-gateway]# LD_LIBRARY_PATH=/usr/local/lib && janus --version
Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd
Compiled on:  Mon May 14 14:25:03 UTC 2018

janus 0.4.0
[root@ip-172-31-28-115 janus-gateway]# which janus
/usr/local/bin/janus
[root@ip-172-31-28-115 janus-gateway]# 
  1. I did the compile again, and here's the result
[root@ip-172-31-28-115 janus-gateway]# LD_LIBRARY_PATH=/usr/lib && /opt/janus/bin/janus --version
Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd
Compiled on:  Mon May 14 15:45:32 UTC 2018

janus 0.4.0
[root@ip-172-31-28-115 janus-gateway]# 
  1. I had issues starting janus, which were resolved by adding '/usr/local/lib' to '/etc/ld.so.conf.d/janus.conf' and running `ldconfig`
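i.e. the fix, spelled out (reconstructed from the sentence above):

# register /usr/local/lib (where libusrsctp landed) with the dynamic linker
echo '/usr/local/lib' > /etc/ld.so.conf.d/janus.conf
ldconfig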
  2. Unfortunately, I have the same issue. Note that the sdp message differs between chrome & firefox
    1. here's the sdp message in chromium (per the janus server logs on highest verbosity)
Session: 6375072996036015
Handle: 1600250370708259
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus":"message","body":{"request":"ack"},"transaction":"LMu5bjOxNA1q","jsep":{"type":"answer","sdp":"v=0\r\no=- 8310479853867794458 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=msid-semantic: WMS\r\nm=application 0 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"}}
Forwarding request to the core (0x7f1538000f80)
Got a Janus API request from janus.transport.http (0x7f1538000f80)
Transport task pool, serving request
[1600250370708259] There's a message for JANUS TextRoom plugin
[1600250370708259] Remote SDP:
v=0
o=- 8310479853867794458 2 IN IP4 127.0.0.1
s=-
t=0 0
a=msid-semantic: WMS
m=application 0 DTLS/SCTP 5000
c=IN IP4 0.0.0.0
a=mid:data
a=sctpmap:5000 webrtc-datachannel 1024
[1600250370708259] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have NOT been negotiated
[WARN] [1600250370708259] Skipping disabled/unsupported media line...
[ERR] [janus.c:janus_process_incoming_request:1193] Error processing SDP
[LMu5bjOxNA1q] Returning Janus API error 465 (Error processing SDP)
    1. and here's the same thing when the client running the textroomtest demo is firefox instead
Session: 654029176767371
Handle: 6994444633419195
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus":"message","body":{"request":"ack"},"transaction":"HtiY4UW9UZDF","jsep":{"type":"answer","sdp":"v=0\r\no=mozilla...THIS_IS_SDPARTA-50.1.0 4746781219317630708 0 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 CC:E1:78:4E:53:A6:A7:9F:DB:06:B4:4C:68:E8:FB:8B:B3:C7:56:C8:8D:B8:F0:A8:B4:5F:E4:45:FF:1B:39:7B\r\na=group:BUNDLE\r\na=ice-options:trickle\r\na=msid-semantic:WMS *\r\nm=application 0 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=inactive\r\na=sctpmap:5000 rejected 0\r\n"}}
Forwarding request to the core (0x7f155001b920)
Got a Janus API request from janus.transport.http (0x7f155001b920)
Transport task pool, serving request
[6994444633419195] There's a message for JANUS TextRoom plugin
[6994444633419195] Remote SDP:
v=0
o=mozilla...THIS_IS_SDPARTA-50.1.0 4746781219317630708 0 IN IP4 0.0.0.0
s=-
t=0 0
a=fingerprint:sha-256 CC:E1:78:4E:53:A6:A7:9F:DB:06:B4:4C:68:E8:FB:8B:B3:C7:56:C8:8D:B8:F0:A8:B4:5F:E4:45:FF:1B:39:7B
a=group:BUNDLE
a=ice-options:trickle
a=msid-semantic:WMS *
m=application 0 DTLS/SCTP 5000
c=IN IP4 0.0.0.0
a=inactive
a=sctpmap:5000 rejected 0
[6994444633419195] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have NOT been negotiated
[6994444633419195] Fingerprint (global) : sha-256 CC:E1:78:4E:53:A6:A7:9F:DB:06:B4:4C:68:E8:FB:8B:B3:C7:56:C8:8D:B8:F0:A8:B4:5F:E4:45:FF:1B:39:7B
[WARN] [6994444633419195] Skipping disabled/unsupported media line...
[ERR] [janus.c:janus_process_incoming_request:1193] Error processing SDP
[HtiY4UW9UZDF] Returning Janus API error 465 (Error processing SDP)
  1. well, when starting janus, there is a warning stating that Data Channels support is *not* compiled in
[root@ip-172-31-28-115 janus-gateway]# /opt/janus/bin/janus
...
[WARN] The libsrtp installation does not support AES-GCM profiles
Fingerprint of our certificate: D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38
[WARN] Data Channels support not compiled
[WARN] Event handlers support disabled
Plugins folder: /opt/janus/lib/janus/plugins
Loading plugin 'libjanus_recordplay.so'...
  1. ugh, I forgot `make clean` before the `make && make install`. Adding that step got me much further! When I loaded the text room, it prompted me for my username (before, it just hung indefinitely). Unfortunately, after this popped up, I got a notification in the browser that we lost connection to the janus gateway. Hopping back to the server, I saw a Segmentation Fault :(
Session: 4989268396723854
Handle: 45723605327998
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus":"message","body":{"request":"ack"},"transaction":"9tdbOIEVuv9q","jsep":{"type":"answer","sdp":"v=0\r\no=- 8019385961591100028 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE data\r\na=msid-semantic: WMS\r\nm=application 9 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\nb=AS:30\r\na=ice-ufrag:MNDb\r\na=ice-pwd:8F39sum8obXhdVgCLhNhUVLo\r\na=fingerprint:sha-256 D5:D6:25:60:4D:24:9A:37:79:55:4C:B2:F4:99:B0:69:DE:A5:F4:F0:4C:72:CD:67:5C:0F:A9:17:BB:E1:FC:00\r\na=setup:active\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"}}
Forwarding request to the core (0x7fa31c004620)
Got a Janus API request from janus.transport.http (0x7fa31c004620)
Transport task pool, serving request
[45723605327998] There's a message for JANUS TextRoom plugin
[45723605327998] Remote SDP:
v=0
o=- 8019385961591100028 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE data
a=msid-semantic: WMS
m=application 9 DTLS/SCTP 5000
c=IN IP4 0.0.0.0
b=AS:30
a=ice-ufrag:MNDb
a=ice-pwd:8F39sum8obXhdVgCLhNhUVLo
a=fingerprint:sha-256 D5:D6:25:60:4D:24:9A:37:79:55:4C:B2:F4:99:B0:69:DE:A5:F4:F0:4C:72:CD:67:5C:0F:A9:17:BB:E1:FC:00
a=setup:active
a=mid:data
a=sctpmap:5000 webrtc-datachannel 1024 
[45723605327998] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have been negotiated
[45723605327998] Parsing SCTP candidates (stream=1)...
[45723605327998] ICE ufrag (local):   MNDb
[45723605327998] ICE pwd (local):     8F39sum8obXhdVgCLhNhUVLo
[45723605327998] Fingerprint (local) : sha-256 D5:D6:25:60:4D:24:9A:37:79:55:4C:B2:F4:99:B0:69:DE:A5:F4:F0:4C:72:CD:67:5C:0F:A9:17:BB:E1:FC:00
[45723605327998] DTLS setup (local):  active
[45723605327998] Setting accept state (DTLS server)
[45723605327998] Data Channel mid: data
Got a sctpmap attribute: 5000 webrtc-datachannel 1024
[45723605327998]   -- ICE Trickling is supported by the browser, waiting for remote candidates...
 -------------------------------------------
  >> Anonymized
 -------------------------------------------
Creating plugin result...
Sending Janus API response to janus.transport.http (0x7fa31c004620)
Got a Janus API response to send (0x7fa31c004620)
Destroying plugin result...
[45723605327998] Sending event to transport...
Sending event to janus.transport.http (0x7fa31c003ce0)
Got a Janus API event to send (0x7fa31c003ce0)
  >> Pushing event: 0 (Success)
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP POST request on /janus/4989268396723854/45723605327998...
[transports/janus_http.c:janus_http_handler:1138]  ... Just parsing headers for now...
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089
[transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive
[transports/janus_http.c:janus_http_headers:1690] Content-Length: 227
[transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */*
[transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org
[transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36
[transports/janus_http.c:janus_http_headers:1690] content-type: application/json
[transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html
[transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, br
[transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/4989268396723854/45723605327998...
[transports/janus_http.c:janus_http_handler:1223]  ... parsing request...
Session: 4989268396723854
Handle: 45723605327998
Processing POST data (application/json) (227 bytes)...
[transports/janus_http.c:janus_http_handler:1248]   -- Data we have now (227 bytes)
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/4989268396723854/45723605327998...
[transports/janus_http.c:janus_http_handler:1223]  ... parsing request...
Session: 4989268396723854
Handle: 45723605327998
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus