Maltfield Log/2018 Q2
Sun May 27, 2018
- Helped Christian get his local wiki instance operational
Sat May 26, 2018
- Emailed with Christian about making an offline version of the wiki for browsing in kiwix like wikipedia & other popular wikis https://wiki.kiwix.org/wiki/Content
- we may have to install Parsoid and/or the VisualEditor extension https://www.howtoforge.com/tutorial/how-to-install-visualeditor-for-mediawiki-on-centos-7/
- but I asked Christian to first look into methods that do not require Parsoid, such as zimmer http://www.openzim.org/wiki/Build_your_ZIM_file#zimmer
- Christian hit some 403 forbidden messages when hitting the mediawiki api, so I attempted to whitelist his ip address from all mod_security rules
# disable mod_security rules as needed
# (found by logs in: /var/log/httpd/modsec_audit.log)
<IfModule security2_module>
	SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 960915 200003 981173 981318 981260 950911 973302 973324 973317 981255 958057 958056 973327 950018 950001 958008 973329 950907 950910 950005 950006 959151 958976 950007 959070 950908 981250 981241 981252 981256 981249 981251 973336 958006 958049 958051 973305 973314 973331 973330 973348 981276 959071 973337 958018 958407 958039 973303 973315 973346 973321 960035

	# set the (sans file) POST size limit to 1M (default is 128K)
	SecRequestBodyNoFilesLimit 1000000

	# whitelist an entire IP that we use for scraping mediawiki to produce
	# kiwix-ready zim files for archival & offline viewing
	SecRule REQUEST_HEADERS:X-Forwarded-For "@Contains 176.56.237.113" phase:1,nolog,allow,pass,ctl:ruleEngine=off,id:1
</IfModule>
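For reference, the rule IDs above were collected from the audit log. Below is a minimal sketch of pulling the distinct rule IDs out of ModSecurity audit-log text; the `$sample` line is a hypothetical stand-in, and on the server you'd grep the real /var/log/httpd/modsec_audit.log instead:

```shell
# Hedged sketch: extract distinct ModSecurity rule IDs from audit-log text.
# The sample line is hypothetical; grep /var/log/httpd/modsec_audit.log in practice.
sample='Message: Access denied with code 403 (phase 2). [id "981242"] [rev "2"] Message: Warning. [id "973334"]'
# pull out every [id "NNNNNN"] token, then just the number, de-duplicated
printf '%s\n' "$sample" | grep -oE '\[id "[0-9]+"\]' | grep -oE '[0-9]+' | sort -u
```

This gives a de-duplicated list that can be pasted into SecRuleRemoveById.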
- In testing, I accidentally banned myself. When validating, I saw that our server had banned 2 other IPs, which are crawlers. I went to their site, and found that they obey robots.txt's "Crawl-delay" option http://mj12bot.com/
[root@hetzner2 httpd]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
DROP       all  --  crawl1.bl.semrush.com  anywhere
DROP       all  --  crawl-vfyrb9.mj12bot.com  anywhere
DROP       all  --  184-157-49-133.dyn.centurytel.net  anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:pharos
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:krb524
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:32415
LOG        all  --  anywhere             anywhere             limit: avg 5/min burst 5 LOG level debug prefix "iptables IN denied: "
DROP       all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DROP       all  --  crawl1.bl.semrush.com  anywhere
DROP       all  --  crawl-vfyrb9.mj12bot.com  anywhere
DROP       all  --  184-157-49-133.dyn.centurytel.net  anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  localhost.localdomain  localhost.localdomain
ACCEPT     udp  --  anywhere             ns1-coloc.hetzner.de  udp dpt:domain
ACCEPT     udp  --  anywhere             ns2-coloc.hetzner.net  udp dpt:domain
ACCEPT     udp  --  anywhere             ns3-coloc.hetzner.com  udp dpt:domain
LOG        all  --  anywhere             anywhere             limit: avg 5/min burst 5 LOG level debug prefix "iptables OUT denied: "
DROP       tcp  --  anywhere             anywhere             owner UID match apache
DROP       tcp  --  anywhere             anywhere             owner UID match mysql
DROP       tcp  --  anywhere             anywhere             owner UID match varnish
DROP       tcp  --  anywhere             anywhere             owner UID match hitch
DROP       tcp  --  anywhere             anywhere             owner UID match nginx
[root@hetzner2 httpd]#
- so I created a robots.txt file per https://www.mediawiki.org/wiki/Manual:Robots.txt
cat << EOF > /var/www/html/wiki.opensourceecology.org/htdocs/robots.txt
User-agent: *
Disallow: /index.php?
Disallow: /index.php/Help
Disallow: /index.php/Special:
Disallow: /index.php/Template
Disallow: /wiki/Help
Disallow: /wiki/Special:
Disallow: /wiki/Template
Crawl-delay: 15
EOF
chown not-apache:apache /var/www/html/wiki.opensourceecology.org/htdocs/robots.txt
chmod 0040 /var/www/html/wiki.opensourceecology.org/htdocs/robots.txt
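After deploying, it's cheap to assert that the directives we care about actually made it into the file. A self-contained sketch (writing to a temp file here instead of the real docroot path, which is an assumption for illustration):

```shell
# Sanity-check a robots.txt after deploying it. A temp file stands in for
# /var/www/html/wiki.opensourceecology.org/htdocs/robots.txt in this sketch.
robots="$(mktemp)"
cat << 'EOF' > "$robots"
User-agent: *
Disallow: /index.php?
Crawl-delay: 15
EOF
# assert the key directives are present
grep -q '^User-agent: \*$' "$robots" && echo 'User-agent present'
grep -q '^Crawl-delay: 15$' "$robots" && echo 'Crawl-delay present'
```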
- I reset the password for 'Zeitgeist_C.Real_-' & sent them an email
- did some back-of-the-envelope calculations for cryptocurrency mining on our wiki, as The Pirate Bay does
- coinhive says 30 hashes/sec is reasonable. https://coinhive.com/info/faq
- yesterday we got 31,453 hits, much of which is probably spiders. Unfortunately, awstats doesn't give unique visitors per day, only per month. But we've basically been online for only one day, and the monthly stats say 2,422 unique visitors this month. So let's say we get 2,000 visitors per day. 81.4% of our visits are <=30s long. The average visit is 208s (probably some people leave it open in the background for a very long time (= all our devs)). Let's be conservative & say that 100-81.4=18.6% of our daily users = 2000 * 0.18 = 360 users are on the site for 30 seconds each. That's 10,800 seconds of mining per day
- Coinhive pays out 0.000064 XMR per 1 million hashes. We'll be generating 10,800 seconds/day * 30 hashes/s = 324,000 hashes per day. 324,000 hashes/day * 30 days = 9,720,000 hashes per month. 9,720,000 / 1,000,000 = 9.72; 9.72 * 0.000064 XMR = 0.00062208 XMR / month.
- At today's exchange rate, that's $0.10 per month. Fuck.
- ^ that said, my calculations are extremely conservative. If we actually have 2,422 unique visitors spending an average of 208 seconds on the site, then it's 2422 * 208 = 503,776 seconds per day. 503,776 * 30 hashes/s = 15,113,280 hashes per day. 15,113,280 * 30 days = 453,398,400 hashes per month. 453,398,400 / 1,000,000 = 453.3984; 453.3984 * 0.000064 XMR = 0.029017498 XMR / month.
- At today's exchange rate, that's $4.85/month.
- So if we cryptomine on our wiki users, we're looking at between $0.10 - $5 per month profit. Meh.
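The conservative estimate above can be sanity-checked with a little awk arithmetic; all figures are taken straight from the bullets above:

```shell
# Reproduce the conservative Coinhive revenue estimate with awk (floats).
# 360 users/day * 30 s each * 30 hashes/s; 30 days/month; 0.000064 XMR per 1e6 hashes
awk 'BEGIN {
  hashes_per_day   = 360 * 30 * 30            # 324,000 hashes/day
  hashes_per_month = hashes_per_day * 30      # 9,720,000 hashes/month
  xmr = hashes_per_month / 1000000 * 0.000064
  printf "%.8f XMR/month\n", xmr
}'
```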
- fixed mod_security false-positive
- whitelisted 981247 = SQLI
- I created a snapshot of the wiki for Christian to build a local copy for zim-ifying it for kiwix
# DECLARE VARS
snapshotDestDir='/var/tmp/snapshotOfWikiForChris.20180526'
wikiDbName='osewiki_db'
wikiDbUser='osewiki_user'
wikiDbPass='CHANGEME'
stamp=`date +%Y%m%d_%T`

mkdir -p "${snapshotDestDir}"
pushd "${snapshotDestDir}"
time nice mysqldump -u"${wikiDbUser}" -p"${wikiDbPass}" --databases "${wikiDbName}" > "${wikiDbName}.${stamp}.sql"

# DECLARE VARS
snapshotDestDir='/var/tmp/snapshotOfWikiForChris.20180526'

mkdir -p "${snapshotDestDir}"
pushd "${snapshotDestDir}"
time nice mysqldump --single-transaction -u"${wikiDbUser}" -p"${wikiDbPass}" --databases "${wikiDbName}" | gzip -c > "${wikiDbName}.${stamp}.sql.gz"
time nice tar -czvf "${snapshotDestDir}/wiki.opensourceecology.org.vhost.${stamp}.tar.gz" /var/www/html/wiki.opensourceecology.org/*
- I drove to town to pickup some plexiglass and epoxy for the 3d printer
- I worked on Marcin's computer for a bit, updated the OSE Marlin github to include compartmentalized Configuration.h files for distinct D3D flavors, and added a new one with the LCD. I got the LCD connected & working, then the SD card connected & working
- When we went to print from the SD card, the print nozzle was too high. We went back to Marlin to print, and the same thing happened.
- We need to fix the z probe (replace it?) before we can proceed further...
Fri May 25, 2018
- Waking up the day after the migration and, hooray, nothing is broken!
- I got a few emails from people saying thank you, a few asking me to delete their account from the wiki, and one from someone asking for help resetting their password.
- I confirmed that the self-password-reset function worked for me (it sent me an email with a temporary password, I used it, logged-in, then reset my password)
- I hopped over to awstats & munin. Finally, awstats is getting some good data.
- Munin looks good too
- obviously there's a jump in the graphs after the wiki (our most popular site) began piping more traffic to hetzner2. Most notable is the number of connections
- CPU & load didn't change much. But if we didn't have varnish, that certainly wouldn't be the case.
- there's some minor changes to the varnish hit rate graph. A tiny sliver of orange hit-for-pass appeared at the top of the graph--that's probably our wiki users that are logged-in. But the hit rate still seems >90%, which is awesome! The dip in the hit rate was from me manually giving varnish a restart--surprisingly it didn't dip much, and it quickly returned to >90% hit rate!
- the number of objects in the varnish cache nearly doubled. I would have expected it to more than double, but maybe it will with time..
- I emailed screencaps of awstats & munin to Marcin & CCd Catarina
- I confirmed that the backup on hetzner1 finished, and that it included the wiki
osemain@dedi978:~$ bash -c 'source /usr/home/osemain/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh hetzner1/*'
12G     hetzner1/20180501-052002
259M    hetzner1/20180502-052001
0       hetzner1/20180520-052001
12G     hetzner1/20180521-052001
12G     hetzner1/20180522-052001
12G     hetzner1/20180523-052001
12G     hetzner1/20180524-052001
481M    hetzner1/20180524-141649
12G     hetzner1/20180524-233533
12G     hetzner1/20180525-052001
- I moved the now-static wiki files from the old server into the noBackup dir, so that we won't have those being copied to dreamhost nightly anymore (the backup size should drop from 12G to 0.5G).
osemain@dedi978:~$ mkdir noBackup/deleteMeIn2019/osewiki_olddocroot
osemain@dedi978:~$ mv public_html/w noBackup/deleteMeIn2019/osewiki_olddocroot/
osemain@dedi978:~$
- the backups on hetzner2 aren't growing like crazy, so that bug appears to have been fixed
osemain@dedi978:~$ bash -c 'source /usr/home/osemain/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh hetzner2/*'
15G     hetzner2/20180501-072001
43G     hetzner2/20180524_072001
15G     hetzner2/20180525_001249
15G     hetzner2/20180525_072001
osemain@dedi978:~$
- total disk usage on dreamhost is 172G; hopefully that won't trigger their action on us again. Now that the wiki is migrated, many other things that were blocked can begin to move--one of which is that we can move our backups onto s3, which will now be ~15G daily it seems.
osemain@dedi978:~$ bash -c 'source /usr/home/osemain/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh .'
172G    .
osemain@dedi978:~$
- we still didn't get a green lock in firefox for ssl because some content was still being loaded over plain http. I changed all the links on our Main_Page at least to be https where possible, but there was another culprit: our Creative Commons image in the bottom-right of every page. I changed this in LocalSettings.php from 'http://mirrors.creativecommons.org/presskit/buttons/88x31/png/by-sa.png' to 'https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by-sa.png'. After clearing the varnish cache with `varnishadm 'ban req.url ~ "."'`, we got the green lock in firefox on our main page!
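A quick way to hunt down remaining mixed-content culprits is to grep the rendered HTML for plain-http URLs. This is a sketch over a hypothetical sample page; in practice you'd pipe in `curl -s https://wiki.opensourceecology.org/` instead:

```shell
# Sketch: list insecure http:// URLs embedded in a page's HTML.
# The here-doc is a hypothetical sample; pipe in curl output in practice.
grep -oE "http://[^\"' >]+" <<'EOF'
<img src="http://mirrors.creativecommons.org/presskit/buttons/88x31/png/by-sa.png">
<a href="https://www.opensourceecology.org/">blog</a>
EOF
```

Note the pattern only matches `http://`, so https links (like the second one) are correctly ignored.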
- I sent Ahmed (Shahat.log) a new password that I manually set for him
- for the 2x users who asked for their accounts to be deleted: unfortunately, deleting mediawiki users isn't supported https://meta.wikimedia.org/wiki/MediaWiki_FAQ#How_do_I_delete_a_user_from_my_list_of_users.3F
- instead, I offered to rename their accounts and/or change their email addresses
- I documented how to "block" users by changing their email address, renaming their account, and banning their account
- now that the wiki stuff is stable, I finally got a chance to merge-in the images I found on the 'oseblog'
- Emailed Marcin about this & discussed about using relative links
- documented TODO for OSE Linux needing encrypted persistence https://wiki.opensourceecology.org/wiki/OSE_Linux#Persistence
- emailed Christian asking him to look into adding encrypted persistence into OSE Linux, providing links for how TAILS does it
- Marcin noticed that we had anonymous edits permitted. I looked in the LocalSettings.php file & found this
$wgGroupPermissions['*']['edit'] = true;
- I commented that out, replacing it with the line
$wgGroupPermissions['*']['edit'] = false; #$wgGroupPermissions['*']['edit'] = true;
- I'm not sure why that didn't permit anon edits before, but it's fixed now!
- I did some further cleanup of the "wiki locking" section of LocalSettings.php. This included commenting-out the misleading line that permitted '*' users to create accounts.
- before
################
# WIKI LOCKING #
################

# uncomment this to put a banner message on every page leading up to a
# maintenance window
#$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki will be temporarily made READ-ONLY during a maintenance window today at 15:00 UTC. Please finish & save any pending changes to avoid data loss.</div>';
#$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki is currently undergoing maintenance and is therefore in an ephemeral state. Any changes made to this wiki will be lost! Please check back tomorrow.</div>';

# temp wiki locks
$wgGroupPermissions['*']['edit'] = false; #$wgGroupPermissions['*']['edit'] = true;
$wgGroupPermissions['*']['createaccount'] = true; #$wgGroupPermissions['*']['createaccount'] = false; ## Account creation disabled by TLG 06/18/2015
$WgGroupPermissions['user']['edit'] = true;
$wgGroupPermissions['sysop']['createaccount'] = true; #$wgGroupPermissions['sysop']['createaccount'] = false; ## Account creation disabled by TLG 06/18/2015
$wgGroupPermissions['sysop']['edit'] = true;

$wgReadOnlyFile="$IP/read-only-message";
- after
################
# WIKI LOCKING #
################

# uncomment this to put a banner message on every page leading up to a
# maintenance window
#$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki will be temporarily made READ-ONLY during a maintenance window today at 15:00 UTC. Please finish & save any pending changes to avoid data loss.</div>';
#$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki is currently undergoing maintenance and is therefore in an ephemeral state. Any changes made to this wiki will be lost! Please check back tomorrow.</div>';

# only registered users can edit
$wgGroupPermissions['*']['edit'] = false;
$WgGroupPermissions['user']['edit'] = true;
$wgGroupPermissions['sysop']['edit'] = true;

# only sysop users can create accounts
$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['sysop']['createaccount'] = true;

$wgReadOnlyFile="$IP/read-only-message";
- Marcin asked me to add 'zip' to the list of possible uploads to the wiki. Done.
- I returned to work on the jellybox
- started at 14:20. Paused at 15:00. minus 30 min for helping Marcin with git
- somehow I skipped the part where the back is installed. Maybe it was absent or I was absent-minded. Anyway, after I installed the so-called "last" acrylic piece on the top & saw that the back was already attached, I just snapped in the back at that point
- somehow I made it to the end without installing the proximity sensor. I went back to this doc & did it https://docs.imade3d.com/Guide/Assemble+the+X+Assembly/147
- the info on the checklist after the build was *extremely* lacking, so I just went off guide & started poking it at this point https://docs.imade3d.com/Wiki/Easy_Kit_Flow#Section_Checkpoint_It_s_Alive
- I stuck in the sd card, navigated to some thing, and told it to print
- I threw the filament spool where it looked like it went, clipped off the end, stuck it in where it looked like it belonged, and clipped it shut
- It heated up, moved around, and then immediately ran horizontally with the extruder tip slamming into the bed. I quickly turned off the big friendly button (power switch)!
- the guide was wrong for setting the "x homing offset". It should be "Settings -> Motion -> X home offset", but it said simply "Settings > X Homing Offset". I found it after poking around the menu for a bit https://docs.imade3d.com/Guide/%E2%86%B3+X+Homing+Offset/213
- there was no y homing offset option
- I finished my first print at 18:19. That's a 10.5 hour build time.
- I did some polishing of the first layer z offset to prevent it from looking like spaghetti on that first layer
- I thought my next print was good, such as the fish skeleton, but Marcin pointed out that the fish's joints should actually be free to move. We broke the joints a bit & got motion, but apparently it should be printed so precisely that we shouldn't have to break it
- I tightened the z motors; they were still loose from my initial build, where the video said to leave them loose. Subsequent prints were better. I got the gears to spin, but--again--only after some breaking
- now prints are sliding around on the surface before the print is finished. I'll have to fix that tomorrow
Thu May 24, 2018
- today is the big day! I began the wiki migration at ~14:00 UTC
- first step: I moved the wiki to a new home outside the docroot, effectively turning the existing wiki off (and making the state of it immutable)
osemain@dedi978:~$ date
Thu May 24 16:07:07 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ mkdir ~/wiki_olddocroot
osemain@dedi978:~$ mv public_html/w ~/wiki_olddocroot
osemain@dedi978:~$
- I created a simple html file indicating that we're down for maintenance where the old site was.
osemain@dedi978:~$ mkdir public_html/w
osemain@dedi978:~$ echo 'This wiki is currently down for maintenance. Please check back tomorrow.' > public_html/w/index.html
osemain@dedi978:~$
- now that the wiki can't change mid-backup, I initiated the backup process. This should be our last backup of the site pre-migration. Note that, after this completes, I'll need to move the '~/wiki_olddocroot' dir into '~/noBackup/deleteMeIn2019/wiki_olddocroot', else our 18G immutable version of the old wiki will still be backed-up daily, superfluously
osemain@dedi978:~$ date
Thu May 24 16:16:45 CEST 2018
osemain@dedi978:~$ # STEP 0: CREATE BACKUPS
osemain@dedi978:~$ source /usr/home/osemain/backups/backup.settings
osemain@dedi978:~$ /usr/home/osemain/backups/backup.sh
================================================================================
Beginning Backup Run on 20180524-141649
...
- I know from experience that the hetzner1 backups take an absurdly long time to complete, but hetzner2 is only a couple hours. I probably won't wait for both to finish. I will kick off the one on hetzner2 and kick off the migration of the data to hetzner2 at the same time. Then, after the hetzner2 backup finishes, I'll tear down the ephemeral wiki & replace it with the migrated data.
- First, I start the backups on hetzner2
[root@hetzner2 ~]# # STEP 0: CREATE BACKUPS
[root@hetzner2 ~]# # for good measure, trigger a backup of the entire system's database & files:
[root@hetzner2 ~]# time /bin/nice /root/backups/backup.sh &>> /var/log/backups/backup.log
- Then I kicked-off the file/data transfer from hetzner1 to hetzner2. Note that I had to modify some of these vars because I moved the files outside the docroot as the only way to stop writes to the db + files (hetzner1 is a shared hosting server, so I can't mess with the vhost configs or bounce the httpd service, etc.; one of the reasons we're migrating to hetzner2!)
# DECLARE VARIABLES
source /usr/home/osemain/backups/backup.settings
stamp=`date +%Y%m%d`
backupDir_hetzner1="/usr/home/osemain/noBackup/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}"
backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2"
backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz"
#vhostDir_hetzner1='/usr/www/users/osemain/w'
vhostDir_hetzner1='/usr/home/osemain/wiki_olddocroot/w'
dbName_hetzner1='osewiki'
dbUser_hetzner1="${mysqlUser_wiki}"
dbPass_hetzner1="${mysqlPass_wiki}"

# STEP 1: BACKUP DB
mkdir -p ${backupDir_hetzner1}/{current,old}
pushd ${backupDir_hetzner1}/current/
mv ${backupDir_hetzner1}/current/* ${backupDir_hetzner1}/old/
time nice mysqldump -u"${dbUser_hetzner1}" -p"${dbPass_hetzner1}" --all-databases --single-transaction | bzip2 -c > ${backupDir_hetzner1}/current/${backupFileName_db_hetzner1}

# STEP 2: BACKUP FILES
time nice tar -czvf ${backupDir_hetzner1}/current/${backupFileName_files_hetzner1} ${vhostDir_hetzner1}
- while those 3 backups ran, I logged into our cloudflare account
- Unrelated, but after I logged-into our cloudflare account, I reset the password to a new 100-char random password. I stored this to our shared keepass.
- in the dns tab, I disabled CDN/DDOS protection for the 'opensourceecology.org' domain. That's the last one that had that enabled. So, after this migration, we'll be able to either cancel our cloudflare account (just go with the free version) or migrate our DNS back to dreamhost. iirc, cloudflare provides some free services, and they're actually a pretty good low-latency dns provider. So maybe we keep them as a free nameserver. Either way, let's stop paying them!
- The new site is going to be 'wiki.opensourceecology.org', which is already pointed to hetzner2. Only after we've finished validating this new wiki will I make 'opensourceecology.org' point to hetzner2. Then I'll probably just setup some 301 redirect with modrewrite to point all 'opensourceecology.org/w/*' requests to 'wiki.opensourceecology.org/'.
- I added a message to the top of our 'wiki.opensourceecology.org' site to warn users that they shouldn't make changes until after the migration is complete, lest they lose their work!
$wgSiteNotice = '<div class="notify-warning" style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;">ⓘ NOTICE: This wiki is currently undergoing maintenance and is therefore in an ephemeral state. Any changes made to this wiki will be lost! Please check back tomorrow.</div>';
- the backup run on hetzner2 finished creating its encrypted archive, and the hetzner1 backup of the wiki files/data finished at about the same time. The backup process on hetzner2 was still rsyncing to dreamhost, but that's fine. I'm going to proceed with bringing up the new wiki on hetzner2.
- Marcin tried to login, but it failed (because his password was too short)
Incorrect username or password entered. Please try again.
- the only outstanding issue with the wiki is that Marcin isn't getting an email when someone requests an account on the wiki. Note that we did confirm that we can approve them from the wiki, then the user does get an email with their temp password after the account is created. It's just that the initial email notifying Marcin that someone submitted a request to create an account isn't coming through.
- we decided that this isn't a blocker; we're going to mark the migration as a success, and I'll look into the outgoing emails at a later date
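When I do dig into it, step one will be confirming whether MediaWiki ever handed the notification message to the local MTA. A hedged sketch, assuming a Postfix-style maillog; the `$sample_line` below is a made-up stand-in, and on the server you'd grep the real /var/log/maillog for the recipient's address:

```shell
# Hedged sketch: did the notification mail ever leave the MTA?
# The sample line is hypothetical; grep /var/log/maillog in practice.
sample_line='May 24 16:20:01 hetzner2 postfix/smtp[1234]: 4AF2E81: to=<marcin@example.com>, status=sent (250 2.0.0 OK)'
# look for a delivery attempt & its final status for the recipient
printf '%s\n' "$sample_line" | grep -E 'to=<[^>]+>.*status=(sent|bounced|deferred)'
```

If no matching line exists at all, the problem is upstream of the MTA (i.e. MediaWiki never sent it); if status is bounced/deferred, the problem is delivery.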
- next, I need to setup redirects, email our most active users that they should reset their passwords, and mark all users' passwords as expired
- I tested that I could force my test user to have their password reset upon next login. note that this field is normally 'NULL' for our users
UPDATE wiki_user SET user_password_expires = '19990101000000' WHERE user_name = 'Maltfield2Test';
- Marcin found an issue with mod_security blocking an ini file, which is apparently an "initialization" file type related to 3d printing. It's also a config-file extension, so mod_security blocked it
- fixed by whitelisting rule id 960035
- I also enabled uploading of 'ini' files by adding it to the $wgFileExtensions array in LocalSettings.php
- I updated our 'opensourceecology.org' dns entry to point to our new server = '138.201.84.243'
- before I reset everyone's password, I'm going to kick off a backup. I confirmed that this morning's backup from before the migration finished before I kicked off the post-migration backup.
[root@hetzner2 ~]# bash -c 'source /root/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh hetzner2/*'
15G     hetzner2/20180501-072001
51G     hetzner2/20180520_072001
53G     hetzner2/20180521_072001
50G     hetzner2/20180522_072001
56G     hetzner2/20180523_072001
43G     hetzner2/20180524_072001
57G     hetzner2/20180524_142031
[root@hetzner2 ~]#
- I also confirmed that our last good backup of the site on hetzner1 before migration was finished too, but it's too small--it must have omitted our wiki!
- I decided to run the hetzner1 backup again, but I put the docroot back in-place. Nothing is pointed at hetzner1, so we shouldn't actually have any split-brain issues. But to be safe, I created a file to lock the db.
osemain@dedi978:~$ date
Fri May 25 01:32:32 CEST 2018
osemain@dedi978:~$ pwd
/usr/home/osemain
osemain@dedi978:~$ ls public_html/w
AdminSettings.sample favicon.ico INSTALL oft read-only-message.20170814.bak serialized tomg.php
api.php favicon.png install-utils.inc old.txt read-only-message.20170819.bak site-config tom_merge.php
api.php5 files jsduck.json oldusers.html read-only-message.20170826 sitemap.xml trackback.php
bin googledf1b3694f1a60c17.html languages oldusers.txt read-only-message.20180301 sitemap.xml.gz trackback.php5
cache Gruntfile.js load.php opensearch_desc.php redirect.php skins Update.php.html
composer.json HISTORY load.php5 opensearch_desc.php5 redirect.php5 StartProfiler.php UPGRADE
config htaccess-site.txt LocalSettings.php ose-logo.png redirect.phtml StartProfiler.sample wiki.phtml
COPYING images LocalSettings.php.update OSE_Proposal_2008.pdf RELEASE-NOTES support.html wp-content
CREDITS img_auth.php log.tmp php5.php5 RELEASE-NOTES-1.20 test.php
docs img_auth.php5 maintenance profileinfo.php RELEASE-NOTES-1.23 tests
error_log includes Makefile profileinfo.php5 RELEASE-NOTES-1.23.1 thumb_handler.php
export index.php math README RELEASE-NOTES-1.24 thumb_handler.php5
extensions index.php5 mathjax README.mediawiki resources thumb.php
FAQ index-site.php mw-config read-only-message robots-site.txt thumb.php5
osemain@dedi978:~$ cat public_html/w/read-only-message
this site is offline. Please see our new site at https://wiki.opensourceecology.org
osemain@dedi978:~$
- I updated our let's encrypt certificate to include the naked domain 'opensourceecology.org'
certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot \
 -w /var/www/html/www.opensourceecology.org/htdocs/ -d opensourceecology.org \
 -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org \
 -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org \
 -w /var/www/html/staging.opensourceecology.org/htdocs -d staging.opensourceecology.org \
 -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org \
 -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org \
 -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org \
 -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d munin.opensourceecology.org

/bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
nginx -t && service nginx reload
- then I enabled redirects from our naked domain (which now should point to the blog at 'www') back to the wiki when someone uses the old style domain. It matches for anything starting with '/wiki/'. Note that I had to encapsulate the last return in a location block as well to make it work for some reason (otherwise it always redirects to 'www')
[root@hetzner2 conf.d]# date
Thu May 24 23:49:21 UTC 2018
[root@hetzner2 conf.d]# pwd
/etc/nginx/conf.d
[root@hetzner2 conf.d]# head -n 50 www.opensourceecology.org.conf
################################################################################
# File:    www.opensourceecology.org.conf
# Version: 0.2
# Purpose: Internet-listening web server for truncating https, basic DOS
#          protection, and passing to varnish cache (varnish then passes to
#          apache)
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2018-01-03
# Updated: 2018-05-24
################################################################################

server {

	# redirect the naked domain to 'www'

	#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
	#                '$status $body_bytes_sent "$http_referer" '
	#                '"$http_user_agent" "$http_x_forwarded_for"';
	#access_log /var/log/nginx/www.opensourceecology.org/access.log main;
	#error_log /var/log/nginx/www.opensourceecology.org/error.log main;

	include conf.d/secure.include;
	include conf.d/ssl.opensourceecology.org.include;

	listen 138.201.84.243:443;
	server_name opensourceecology.org;

	####################
	# REDIRECT TO WIKI #
	####################

	# this is for backwards-compatibility; before 2018-05-24, both the wiki and
	# this site shared the same domain-name. So, just in case someone sends
	# opensourceecology.org/wiki/ a query trying to find the wiki, let's send them
	# to the right place..
	location ~* '^/wiki/' {
		return 301 https://wiki.opensourceecology.org$uri;
	}

	# this must be wrapped in a dumb location block or else the above block
	# does not work *shrug*
	location / {
		return 301 https://www.opensourceecology.org$uri;
	}

}

server {
[root@hetzner2 conf.d]#
- I encountered some issues with the backup process filling up the disk. I believe the issue started when I changed the script to encrypt our backups: I had a logic error that prevented our backups from excluding the previous day's backups. So it grew fast! Our dreamhost usage was already approaching 500G again, so I quickly deleted all but the most important recent backups from hetzner2
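The bug class here is tar recursing into the directory that holds earlier backups, so each day's archive swallows the day before's. A minimal, self-contained sketch of the exclusion fix, using temp-dir stand-ins for the real paths (not our actual backup script):

```shell
# Sketch: keep yesterday's archives out of today's archive with --exclude.
# All paths are temp-dir stand-ins for illustration.
workdir="$(mktemp -d)"
mkdir -p "$workdir/site/htdocs" "$workdir/site/backups"
echo 'hello' > "$workdir/site/htdocs/index.html"
echo 'old archive' > "$workdir/site/backups/20180523.tar.gz"
# without --exclude, site/backups/ (and every prior archive) would be re-archived daily
tar -czf "$workdir/today.tar.gz" --exclude='site/backups' -C "$workdir" site
tar -tzf "$workdir/today.tar.gz"
```

The listing shows site/htdocs/index.html but nothing under site/backups/, so the archive size stays flat instead of compounding.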
- before
hancock% du -sh hetzner1
72G     hetzner1
hancock% du -sh hetzner2
321G    hetzner2
- after
hancock% du -sh hetzner1
72G     hetzner1
hancock% du -sh hetzner2
108G    hetzner2
hancock%
- The hetzner2 backup completed.
- set the expiration timestamp of everyone's password to a date in the past, effectively forcing them to change their passwords on their next login attempt
[root@hetzner2 wiki_20180524]# mysql -u "${dbSuperUser_hetzner2}" -p"${dbSuperPass_hetzner2}" osewiki_db
...
MariaDB [osewiki_db]> UPDATE wiki_user SET user_password_expires = '19990101000000';
Query OK, 1913 rows affected (0.03 sec)
Rows matched: 1914  Changed: 1913  Warnings: 0

MariaDB [osewiki_db]> Bye
[root@hetzner2 wiki_20180524]#
- I then sent an email to every user with >5 edits (~500 people)
Hello,
For security reasons, it's imperative that you change your password on the Open Source Ecology wiki. If you re-used your wiki password on any other websites, you should change those accounts' passwords as well.
We just completed a major update to the OSE wiki. For more information on these changes, please see:
* https://wiki.opensourceecology.org/wiki/CHG-2018-05-22
As you can see from the above link, our wiki is located at a new location. And note that it's now using https, so your connection to our wiki is encrypted (before we were using http).
Any communications sent over http (as opposed to https) can be trivially intercepted by a third party. This includes passwords. Therefore, you should assume that any passwords you used at any OSE website before now has been compromised, and those passwords should be retired & replaced wherever you used that password (on OSE sites or elsewhere).
In addition to enabling https, we've made many updates to the wiki. If you have any technical issues with our wiki, please don't hesitate to contact me.
Thank you,

Michael Altfield
Senior System Administrator
PGP Fingerprint: 8A4B 0AF8 162F 3B6A 79B7 70D2 AA3E DF71 60E2 D97B

Open Source Ecology
www.opensourceecology.org
- while Marcin worked on wiki validation, I returned to work on the jellybox 3d printer build
- I started ~12:30. Stopped at 16:00
- the orientation of the idler bearings was not specified in the video. Probably someone with more mechanical experience wouldn't fuck this up, but I did. I realized my mistake when putting together the second idler. The correct orientation was still not specified, but the video happened to show it properly.
- note that the jellybox kit doesn't include a multimeter
- I did find HD photos on their site that do a great job of providing pictorial detail. Now that I think about it, they should probably just splice these into the video, either PIP-style in the corner or as 1-second overlays. I did a lot of pause-work-play-pause-work action with the videos, and sometimes I had to switch tabs to the online documentation with the HD photos. If I could just pause the video on a nice HD, annotated shot, that would be very helpful.
- the instructions for the usb cable didn't tell me I needed a nut. Perhaps the usb cable end used to have this built-in, but I definitely needed to add a nut to mine. Also, the bolt wasn't long enough to accommodate a nut, so I had to use an m3x16 instead of the m3x12. This will probably leave me short 2x m3x16s. Hopefully I have enough spares..
Tue May 23, 2018
- continued working on fixing the items that Marcin marked as "not working" following the formal wiki validation of the staging site pre-migration
- Marcin said the page "GVCS Tools Status" wouldn't save. I could not reproduce; perhaps they were fixed by the mod_security rules I whitelisted when fixing "Flashy XM" https://wiki.opensourceecology.org/wiki/GVCS_Tools_Status
- Marcin pointed out that the flash embed of a TED talk on the DPV page was missing https://wiki.opensourceecology.org/wiki/Dedicated_Project_Visits
- Flash is dead. And Ted doesn't offer this content over https. I asked Marcin if this is the same TED talk that we show with an iframe to vimeo on our main page. If so, that's our best fix.
- Marcin has the same issue on the "Open_Source_Ecology" page. It's also a flash ted, but there's also some strange cooliris.swf. Even the old link is down. I asked Marcin if it should be removed entirely https://wiki.opensourceecology.org/wiki/Open_Source_Ecology
- Marcin pointed out that iframes don't work...but they clearly did for the above vimeo embeds. I asked for clarification.
- Marcin mentioned an issue with the Miracle Orchard Workshop Book section w/ issuu. But it works fine for me. I'll ask for clarification. https://wiki.opensourceecology.org/wiki/Miracle_Orchard_Workshop#Book
- Marcin mentioned that there was an issue embedding eventzilla, but I don't see an example of this failing. I asked for clarification.
- Marcin pointed out issues with uploading many file formats, such as '.fcstd' freecad files. The test was to download this existing file on the wiki, then try to reupload it. It failed with a mime error https://wiki.opensourceecology.org/wiki/File:Peg_8mm_rods.fcstd
File extension ".fcstd" does not match the detected MIME type of the file (application/zip).
- I read through this document, which describes how mediawiki detects & handles mime types of files https://www.mediawiki.org/wiki/Manual:MIME_type_detection
- for us, the relevant config files on hetzner1 are:
- /usr/home/osemain/public_html/w/includes/mime.types
- /usr/home/osemain/public_html/w/includes/mime.info
- and they changed paths for hetzner2 (probably just because it's a newer version of Mediawiki):
- /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/mime/mime.types
- /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/mime/mime.info
- I couldn't find any reference to fcstd files in either file on hetzner1. My best guess is that mime type checking wasn't as strict on that older version of mediawiki
osemain@dedi978:~/public_html/w/includes$ grep 'fcstd' mime.*
osemain@dedi978:~/public_html/w/includes$
- I checked the mime format of the file in question on hetzner2, and--yes--I got 'application/zip'
[root@hetzner2 mime]# file --mime-type ../../../images/3/32/Peg_8mm_rods.fcstd
../../../images/3/32/Peg_8mm_rods.fcstd: application/zip
- I added a line 'application/zip fcstd' to hetzner2:/var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/mime/mime.types
- Note that there's already a line for this mime type. I confirmed that adding this additional line complements (as opposed to replaces) the existing line by uploading an actual '.zip' file. Note that 'zip' isn't in our '$wgFileExtensions'--I had to temporarily add it.
- I then also had to add 'fcstd' to the $wgFileExtensions array in LocalSettings.php. Note that 'FCStd' was already there, but there's a case difference. That sucks that this file extension list is case-sensitive :(
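In sketch form, the pair of changes looks like this (the mime.types line is appended verbatim; the LocalSettings.php line uses MediaWiki's stock array-append idiom, assuming $wgFileExtensions is defined earlier in the file):

```
# includes/libs/mime/mime.types -- map the detected mime type to the extension
application/zip fcstd

# LocalSettings.php -- allow the lowercase extension (the list is case-sensitive)
$wgFileExtensions[] = 'fcstd';
```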
- I made similar changes & confirmed that I could upload the following file formats as well
- stf
- rtf
- csv
- xml
- dxf
- Marcin & I met at noon to discuss this migration. It looks like most of the issues above are trivial fixes or I already fixed them. We did attempt to register & approve a new user. There seem to be some issues with mediawiki email. But it could just be exceedingly slow. Not sure. In any case, we were able to request an account. Marcin approved the new request & created the account (there's an additional step, as the approval redirects to the admin-only "Create Account" Special Page).
- We also accidentally validated that new accounts must be >10 characters & strong. Marcin was driving, and his test user used a short password. Mediawiki refused the password with a nice error that said passwords must be >=10 characters. I told him to just use '1234567890' for the test, which was also rejected because it was a common password (nice!). On our third try, it went through--with a sufficiently long & non-common password.
- Marcin & I met again. He said that everything is working except xml files. His example is being blocked by the wiki's built-in script blocker. Upon consideration, Marcin said xml files aren't a requirement; we can just nowiki them.
- We had some intermittent issues with email coming from the server to the new user. Our solution was just to set $wgEmailConfirmToEdit to 'false'. So now someone can edit pages without having to click the confirmation email sent from our server. The risk here is that someone can use a bogus email, but they still have to be approved by Marcin. Not a big issue.
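i.e. one line in LocalSettings.php (a sketch of the stock MediaWiki setting):

```
$wgEmailConfirmToEdit = false;
```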
- We decided to push back the migration to tomorrow, starting ~9am CT.
- Marcin & I went to add spare steel roofing on top of the old workshop's carpet sandwich roof
- Marcin & I spent some time repairing the micro track
- Based on an earlier conversation Marcin & I had about my youthful experimentation with building ball mills to grind aluminum down to a fine powder + its utility for 3d printing via laser sintering, I did some research on this & added some comments to the Open Source Ball Mill page
Mon May 22, 2018
- created a page for the imade3d jellybox 3d printer on the wiki Imade3d
- created a page for documenting my experience building the Jellybox 1.3 3d printer Jellybox_1.3_Build_2018-05
- I emailed Meetecho asking for a quote for a webrtc SaaS solution for a 2-day workshop
Hello,
We're looking for a solution that will allow us to have ~ 1-12 remote speakers present a workshop to ~10-100 remote viewers.
Traditional P2P WebRTC doesn't scale for our needs, so we were looking at a SFU. In order to run a Janus gateway ourselves, we'd need to purchase an additional server. But we only do workshops like this a few times per year, so we can't justify the purchase of a 24/7 server.
Could you please provide us with a quote for how much it would cost for Meetecho to provide us with a platform (hosted by Meetecho) where we could have several remote speakers presenting to about a hundred participants? It would be essential that the participants can ask real-time questions by typing. It would be a huge plus if they could digitally "raise their hand" and we could temporarily escalate their participant status to become a "publisher" so they can ask a question using RTC mic + webcam (visible to all), and then (after the question has been asked), we could demote them from publisher back to subscriber.
Do you offer something like this? How much would it cost us for a 2-day workshop?
Thank you, Michael Altfield
- The wiki migration was scheduled for today, but we haven't finished validation. To be safe, we're pushing it back to tomorrow. I updated the change page CHG-2018-05-22 and the banner message at the top of the site
- I spent another 30 minutes building the 3d printer bed
- the vice grip supplied is super cheap & doesn't clamp evenly
- I finished the Y-assembly, so next step is "10_Quadruple"
- Catarina's internet is high-latency & slow. It's provided by satellite for $70/month. It has a max bandwidth/month cap that she hits ~1/2 way through the month. I spoke with Catarina about linking Marcin's Microhouse with the Seedhouse.
- Marcin's microhouse has lower latency & generally faster speeds. It works better for a VPN than the satellite connection. And it's cheaper. But their provider (and it's a monopoly, of course) is unreliable--losing internet for days is not uncommon. The satellite connection, ironically, has higher availability.
- The distance between the seedhome and the microhome is 1000 ft. The max recommended distance for cat5 is 100 meters (~328 ft).
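A quick sanity check of that run length (shell arithmetic; 1 ft = 0.3048 m):

```shell
# distance between the buildings vs the cat5 spec limit
awk 'BEGIN { printf "1000 ft = %.1f m (limit: 100 m)\n", 1000 * 0.3048 }'
```

So a direct cat5 run would be roughly 3x over the spec limit.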
- The satellite plan at the seedhome is also locked-in for 2 years.
- c'est la vie; it is what it is. Long-term solution: open source satellites. But slightly less long-term: trench a conduit line between the structures and drop two fiber optic lines in it.
- I fixed a modsecurity issue triggered when adding the following block to https://www.opensourceecology.org/wp-admin/post.php?post=462&action=edit
<em><a href="https://www.opensourceecology.org/wp-content/uploads/2009/01/1-basic-building.jpg"><img src="https://www.opensourceecology.org/wp-content/uploads/2009/01/1-basic-building.jpg" /></a></em>
- 973324 IE XSS
- 973317 IE XSS
- changed Marcin's password again for the ephemeral wiki so it's >20 characters. I sent it to him on wire. Now he can proceed with the validation.
- I began searching on hetzner1 to see if we had any copy of our old wp-content/upload files pre-dating 2012. Unfortunately, the live site I migrated from also only has uploaded files back to 2012
osemain@dedi978:~/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads$ date
Tue May 22 23:23:32 CEST 2018
osemain@dedi978:~/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads$ pwd
/usr/home/osemain/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads
osemain@dedi978:~/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads$ ls -lah
total 36K
drwxr-xr-x  9 osemain users   4.0K Jan  1 01:11 .
drwxr-xr-x  8 osemain osemain 4.0K Mar  2 03:59 ..
drwxr-xr-x  6 osemain users   4.0K Feb 14  2014 2012
drwxr-xr-x 10 osemain users   4.0K Feb 17  2014 2013
drwxr-xr-x 13 osemain users   4.0K Dec  1  2014 2014
drwxr-xr-x 14 osemain users   4.0K Dec  1  2015 2015
drwxr-xr-x 14 osemain users   4.0K Dec  1  2016 2016
drwxr-xr-x 14 osemain users   4.0K Dec  1 01:05 2017
drwxr-xr-x  5 osemain users   4.0K Mar  1 01:03 2018
osemain@dedi978:~/noBackup/deleteMeIn2019/osemain_olddocroot/wp-content/uploads$
- I found a backup that was apparently made in 2017 by someone ~6 months before I became an OSE dev. I confirmed that it also only has uploads starting in 2012.
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ date
Tue May 22 23:41:16 CEST 2018
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ pwd
/usr/home/osemain/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ ls
wp-activate.php  wp-blog-header.php    wp-config-sample.php  wp-includes        wp-login.php     wp-signup.php
wp-admin         wp-comments-post.php  wp-content            wp-links-opml.php  wp-mail.php      wp-trackback.php
wp-backup_1-17-2017.tar.gz  wp-config.php  wp-cron.php  wp-load.php  wp-settings.php  wp-xmlrpc.php
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$ ls -lah wp-content/uploads/
total 32K
drwxr-xr-x  8 osemain osemain 4.0K Jan  1  2017 .
drwxr-xr-x  8 osemain osemain 4.0K Jan 18  2017 ..
drwxr-xr-x  6 osemain osemain 4.0K Feb 14  2014 2012
drwxr-xr-x 10 osemain osemain 4.0K Feb 17  2014 2013
drwxr-xr-x 13 osemain osemain 4.0K Dec  1  2014 2014
drwxr-xr-x 14 osemain osemain 4.0K Dec  1  2015 2015
drwxr-xr-x 14 osemain osemain 4.0K Dec  1  2016 2016
drwxr-xr-x  3 osemain osemain 4.0K Jan  1  2017 2017
osemain@dedi978:~/noBackup/deleteMeIn2018/usr/home/osemain/public_html/wp-backup_tarball$
- I also checked the old hetzner1 cron jobs, but there was only one to create an xml backup of the wiki. Nothing existed for wordpress
- I emailed Elifarley asking if he may know of some old backups we may have of the wordpress site from before 2011
- I did some more digging, and I did find the images on a distinct user on the same machine at hetzner1:/usr/www/users/oseblog/wp-content/uploads/
oseblog@dedi978:~/.ssh$ date
Wed May 23 07:53:45 CEST 2018
oseblog@dedi978:~/.ssh$ ls -lah /usr/www/users/oseblog/wp-content/uploads/
total 112K
drwxrwxrwx 14 oseblog oseblog 4.0K Jan  1  2014 .
drwxr-xr-x 20 oseblog oseblog 4.0K Nov 30  2014 ..
drwxrwxrwx  5 oseblog oseblog 4.0K Jan 18  2010 2007
drwxrwxrwx 14 oseblog oseblog 4.0K Jan 18  2010 2008
drwxrwxrwx 14 oseblog oseblog 4.0K Jan 18  2010 2009
drwxrwxrwx 14 oseblog oseblog 4.0K Dec 18  2010 2010
drwxrwxrwx 14 oseblog oseblog 4.0K Dec  3  2011 2011
drwxrwxrwx 14 oseblog users   4.0K Dec  1  2012 2012
drwxrwxrwx 14 oseblog users   4.0K Dec  1  2013 2013
drwxrwxrwx 14 oseblog users   4.0K Dec  1  2014 2014
drwxrwxrwx 11 oseblog oseblog 4.0K Sep 15  2010 avatars
drwxrwxrwx  3 oseblog oseblog 4.0K Aug 16  2010 group-avatars
-rw-r--r--  1 oseblog oseblog   94 Aug  8  2011 .htaccess
drwxrwxrwx  2 oseblog users    24K Nov  1  2011 img
--w-------  1 oseblog users    25K Jul 26  2011 lib.php
drwxrwxrwx  2 oseblog oseblog 4.0K Aug 21  2010 podpress_temp
-rw-r--r--  1 oseblog users    122 Sep 11  2012 system.php
oseblog@dedi978:~/.ssh$
- I did another 30 minutes on the jellybox
- The number of zipties was off from the video & my version of the jellybox. Not a big deal.
- I finished part 10, 11, & 12. Next up: 13: Wiring
- Operated the Auger for Marcin & Catarina to plant grapes & evergreen trees at Seedhome. The Auger broke just before the last couple holes: a plastic piece that stabilizes the handle fell off (its two bolts loosened & fell out; one was found, the other was not), and the electrical switch for turning it off stopped working. Marcin squeezed the gas line to make the engine stall.
- Made a big batch of hummus
- Marcin got back to me on the wiki validation, and he found a ton of issues
- One of the first issues was he couldn't edit the 'Flashy XM' wiki article due to a mod_security 403 Forbidden. I fixed it by whitelisting the following rules
- 973337 XSS
- 958018 XSS
- 958407 XSS
- 958039 XSS
- 973303 XSS
- 973315 XSS
- 973346 XSS
- 973321 XSS
Mon May 21, 2018
- published log, updated hours
- Marcin sent me an email about missing pictures from a post in 2008 from blog.opensourcecology.org, but we didn't have any uploads before 2011
[root@hetzner2 uploads]# pwd
/var/www/html/www.opensourceecology.org/htdocs/wp-content/uploads
[root@hetzner2 uploads]# ls
2012 2013 2014 2015 2016 2017 2018
[root@hetzner2 uploads]#
I did a search for that image in all the years, and nothing resulted.
[root@hetzner2 uploads]# find . | grep -i 'basic-building'
[root@hetzner2 uploads]#
- Marcin said we should check old backups for this image data
- other email follow-up
- attempted to build IMade3D Jellybox. I have never used nor operated a 3d printer. I've only seen & read about them. Now, I attempted to build this machine by myself
- I spent 3.5 hours on this build today.
- the box didn't actually include instructions; it included a few pages that said "go here on our website to view the instructions." So I mostly ended up watching videos on youtube that break apart the steps
- the videos didn't tell me which box everything was in. The first task was to apply thread lock to the set screws for the puller motors. Where is the thread lock? Does this kit even include it? Finally, it was found in the Tools box. Once you open each box, it has a checklist with the items inside that box. It would be great, however, if the instructions (either the ones printed or the ones found online) iterated all the boxes and specified what each box contains. And then, for each step, it mentioned "you'll need X part, which can be found in the Y box"
- some of the items were actually already built, so I just dumbly watched a video describing how to build something that was already built. Okay..
- putting the "bird on the branch" was brilliant. Even though I didn't have to do this; it was already built!
- the kit didn't include a phillips head screwdriver, which is necessary for installing the fan. not a big deal, but worth noting
- the wires were already run in the box. That was nice.
- in general, having videos is great. If a picture says a thousand words, then a video says a thousand pictures (well, ~30 pictures per second). Being able to see the thing oriented rather than trying to figure out what is the right vs left side is critical.
- that said, the videos on youtube were just 360p. This should really, really be high def. If I want to freeze frame to see which small hole a bolt is going through, it's necessary to have high def so I get a detailed picture instead of a blur.
- I met with Marcin to set him up with an ext3 backup usb drive that contains a 20M veracrypt file container. This encrypted volume contains:
- a backup of his pgp key (I created a new pgp key for him yesterday + revoked his old key from earlier this year as he forgot the password)
- a backup of his ssh key (I created a new ssh key for him yesterday)
- a backup of his personal keepass file (We created a new keepass for him yesterday)
- a backup of our shared ose keepass, which lives on the ose server. He also has a backup of the keyfile used to decrypt our shared ose keepass on this drive as well.
- a redundant backup of our key file used to encrypt our server backups
- Marcin's laptop now has ssh access to hetzner2 just by typing `ssh www.opensourceecology.org`. There's a '$HOME/.ssh/config' file that sets the port correctly.
- Marcin has been trained on how to access our shared ose keepass remotely using sshfs.
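The '$HOME/.ssh/config' entry is along these lines (a sketch; the port, username, and key path below are placeholders, not our real values):

```
# ~/.ssh/config (sketch -- values below are placeholders)
Host www.opensourceecology.org
    # hetzner2 listens on a non-standard ssh port
    Port 32415
    User marcin
    IdentityFile ~/.ssh/id_rsa
```

With that in place, both `ssh www.opensourceecology.org` and an sshfs mount along the lines of `sshfs www.opensourceecology.org:/path/to/keepass ~/ose` (path hypothetical) pick up the right port & user automatically.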
Sun May 20, 2018
- I arrived to FeF! I finally met Marcin & Catarina in-person and got a tour of the workshop, seedhomes, etc.
- My purpose for this visit is primarily driven by the need to ensure that Marcin has access to our live keepass (which necessitates having ssh access to our server) as well as a local copy of his ssh key, personal keepass, and our shared ose keepass. We may also migrate the wiki tomorrow, depending on the status of its validation
Fri May 18, 2018
- I consolidated all our modsecurity whitelists from all our wordpress sites into one long, numerically sorted list, then I added this to all our wordpress sites' vhost configs. This will prevent false-positive 403 issues that have been fixed on one wordpress site from cropping up on another. It's not ideal, but it's a pragmatic compromise.
[root@hetzner2 conf.d]# date
Fri May 18 14:29:10 UTC 2018
[root@hetzner2 conf.d]# pwd
/etc/httpd/conf.d
[root@hetzner2 conf.d]# grep -A 3 '<LocationMatch "/(wp-admin|ose-hidden-login)/">' * | grep 'SecRuleRemoveById' | tr " " "\n" | sort -un | grep -vi SecRuleRemoveById | tr "\n" " "
200003 200004 950001 950109 950120 950901 958008 958030 958051 958056 958057 959070 959072 959073 960015 960017 960020 960024 960335 960904 960915 970901 973300 973301 973304 973306 973316 973327 973329 973330 973331 973332 973333 973334 973335 973336 973337 973338 973344 973347 981172 981173 981231 981240 981242 981243 981244 981245 981246 981248 981253 981257 981317 981318 981319 [root@hetzner2 conf.d]#
[root@hetzner2 conf.d]#
- this was applied to the following vhost files
[root@hetzner2 conf.d]# grep -irl '<LocationMatch "/(wp-admin|ose-hidden-login)/">' *
000-www.opensourceecology.org.conf
00-fef.opensourceecology.org.conf
00-oswh.opensourceecology.org.conf
00-seedhome.openbuildinginstitute.org.conf
00-www.openbuildinginstitute.org.conf
staging.opensourceecology.org.conf
[root@hetzner2 conf.d]#
- for the record, here is what the files had before the change
[root@hetzner2 conf.d]# grep -irA 3 '<LocationMatch "/(wp-admin|ose-hidden-login)/">' * | grep 'SecRuleRemoveById'
000-www.opensourceecology.org.conf- SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 958057 958056 973327 973337 950001 973336 958051 973331 973330 959070 958008 973329 960024
00-fef.opensourceecology.org.conf- SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 960915 200003
00-oswh.opensourceecology.org.conf- SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246
00-seedhome.openbuildinginstitute.org.conf- SecRuleRemoveById 960015 981173 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120
00-www.openbuildinginstitute.org.conf- SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 981318
staging.opensourceecology.org.conf- SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 958057 958056 973327 973337 950001 973336 958051 973331 973330 959070
[root@hetzner2 conf.d]#
- I actually wrapped these new mod_security whitelist rules up into a new file at /etc/httpd/conf.d/mod_security.wordpress.include . This way, when we add one, we add it to all sites.
- I intentionally did not do this with the other common wordpress blocks, such as blocking of '.git' dirs, blocking 'wp-login.php', etc, as I don't want someone to comment out the include while debugging a mod_security issue and suddenly disable these other critical security blocks, which never false-positive like mod_security does. Also, this mod_security stuff actually needs to be updated, so the include file helps; the other stuff is essentially static.
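The per-vhost hookup is then just an Include where the per-site SecRuleRemoveById list used to live (a sketch; the exact surrounding context in our vhosts is the LocationMatch shown above):

```
# e.g. inside 000-www.opensourceecology.org.conf
<IfModule security2_module>
    <LocationMatch "/(wp-admin|ose-hidden-login)/">
        Include conf.d/mod_security.wordpress.include
    </LocationMatch>
</IfModule>
```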
- I also added a block that prevents php execution of files placed into the uploads dir
# don't execute any php files inside the uploads directory <LocationMatch "/wp-content/uploads/"> php_flag engine off </LocationMatch> <LocationMatch "/wp-content/uploads/.*(?i)\.(cgi|shtml|php3?|phps|phtml)$"> Order Deny,Allow Deny from All </LocationMatch>
- Marcin said we should migrate the wiki Tuesday pending validation.
- I spent some time formally documenting all the wiki changes here http://opensourceecology.org/wiki/CHG-2018-05-22
- I added a banner notice message to the prod wiki site with "$wgSiteNotice" to inform our users of the upcoming maintenance window, and I linked to the CHG above
- Marcin sent me another 403 forbidden false-positive. I whitelisted 950907 = "generic" / "system command injection" attack and asked him to try again
- attempted to update the "3d printer workshop" page and I immediately got some modsecurity false-positives, which I whitelisted
- 981256 sqli
- 981249 sqli
- Marcin sent me another string that was triggering modsec false-positives. The fix was to whitelist these rules:
- 958413 xss
Thu May 17, 2018
- Marcin forwarded me a security alert from Dreamhost that our server had been sending spam from the 'ose_marcin' account. Note that recently we got an alert from them about the 'ose_community' account on that server, which had been running drupal. I changed that user's password & shut down the vhost already. Now this is a distinct account! But trying to investigate this damn incident on a shared server without root is like trying to weld with a soldering iron. I sent them an email asking many questions & for more information about what happened.
Wed May 16, 2018
- My request to join the meetecho-janus google group yesterday was approved
- I posted a thread to the meetecho-janus google group asking for janus security best-practices https://groups.google.com/forum/#!topic/meetecho-janus/0Vx_Vl0hmwU
- I updated my git issue. Lorenzo updated their site, and I was able to confirm that the issue occurs there too. https://github.com/meetecho/janus-gateway/issues/1233
- I tried to research ICE hardening, but again the searches for security around webrtc led to marketing guides talking about how secure it is for the client
- while I wait for a response from the janus community on my hardening thread, I began to research how we can administer the videoroom. Specifically, we need to be able to select which participants can become a publisher instead of just a subscriber. This is something OpenTok does well for clients like the MLB, but that shit ain't open.
- there are some janus configuration options regarding authentication with the api, but that appears to be all-or-nothing auth. There doesn't appear to be anything that would specifically allow a subscriber to escalate themselves to becoming a producer. https://janus.conf.meetecho.com/docs/auth.html
- found this which says "you can control who can join, but you can not control his activities after join." https://groups.google.com/forum/#!searchin/meetecho-janus/videoroom$20publish$20authentication%7Csort:date/meetecho-janus/TJivBoiOXA0/KaqrfKx0AwAJ
- so we may have to write a modified version of the videoroom.
- or we can just password protect the whole videoroom, and then just capture the videoroom somehow and rebroadcast it through another subscribe-only channel similar to how youtube live works.
- I posted this question in all of its ignorance here; we'll see what happens.. https://janus.conf.meetecho.com/docs/auth.html
Tue May 15, 2018
- the main dev behind Janus is Lorenzo Miniero, and he's the one that responded to my git issue in <5 minutes. I found an interview with him about the Open Source Janus Gateway here after reviewing his LinkedIn https://www.linkedin.com/pulse/meet-meetecho-janus-gateway-fabian-bernhard
- he also came from the University of Naples Federico II in Naples, Italy--which is where a lot of these WebRTC experts appear to have originated..
- I applied to write messages on the 'meetecho-janus' mailing list. After I'm approved, I'll ask the community if there are any guides on how to harden Janus' configuration or security best-practices. For example, file permissions and hardened configuration options for each of the config files (main, transport, plugins, etc)
Mon May 14, 2018
- continuing to debug why jangouts' text chat didn't work. The text room demo in janus also failed, and it said that it sent the data with "data-channels"
- I could not find out what the initial configure options were when I compiled janus (`janus --version` doesn't list them)
- I tried to reconfigure janus, this time explicitly setting '--enable-data-channels'. It failed with an error from libusrsctp.
[root@ip-172-31-28-115 janus-gateway]# ./configure --enable-data-channels
...
checking for srtp_crypto_policy_set_aes_gcm_256_16_auth in -lsrtp2... yes
checking for usrsctp_finish in -lusrsctp... no
configure: error: libusrsctp not found. See README.md for installation instructions or use --disable-data-channels
[root@ip-172-31-28-115 janus-gateway]#
- the main janus gateway git README explicitly lists usrsctp as a dependency, stating "(only needed if you are interested in Data Channels)". It links to the usrsctp github here https://github.com/sctplab/usrsctp
- the usrsctp github doesn't provide instructions for centos7. It states that it's tested for FreeBSD, Ubuntu, Windows, & Mac.
- I attempted to compile it manually, as internet searches suggested that it's not in any yum repo.
pushd /root/sandbox
git clone https://github.com/sctplab/usrsctp
pushd usrsctp
./bootstrap
./configure && make && sudo make install
popd
popd
- re-running ./configure now lists "DataChannels support: yes", which I confirmed was previously "no"
config.status: executing libtool commands
libsrtp version:           2.x
SSL/crypto library:        OpenSSL
DTLS set-timeout:          not available
DataChannels support:      yes
Recordings post-processor: no
TURN REST API client:      yes
Doxygen documentation:     no
Transports:
    REST (HTTP/HTTPS):     yes
    WebSockets:            no
    RabbitMQ:              no
    MQTT:                  no
    Unix Sockets:          yes
Plugins:
    Echo Test:             yes
    Streaming:             yes
    Video Call:            yes
    SIP Gateway (Sofia):   no
    SIP Gateway (libre):   no
    NoSIP (RTP Bridge):    yes
    Audio Bridge:          no
    Video Room:            yes
    Voice Mail:            no
    Record&Play:           yes
    Text Room:             yes
    Lua Interpreter:       yes
Event handlers:
    Sample event handler:  yes
    RabbitMQ event handler:no
JavaScript modules:        no

If this configuration is ok for you, do a 'make' to start building Janus. A 'make install' will install Janus and its plugins to the specified prefix. Finally, a 'make configs' will install some sample configuration files too (something you'll only want to do the first time, though).
[root@ip-172-31-28-115 janus-gateway]#
- I recompiled & deployed with `make && make install`, then restarted janus. Unfortunately, I have the same issue
Session: 8126727588102204
Handle: 3978092954853617
Processing POST data (application/json) (310 bytes)...
[transports/janus_http.c:janus_http_handler:1248] -- Data we have now (310 bytes)
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/8126727588102204/3978092954853617...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 8126727588102204
Handle: 3978092954853617
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus":"message","body":{"request":"ack"},"transaction":"RvFOm1M7roLf","jsep":{"type":"answer","sdp":"v=0\r\no=- 6893769308065182494 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=msid-semantic: WMS\r\nm=application 0 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"}}
Forwarding request to the core (0x7f00ec000ca0)
Got a Janus API request from janus.transport.http (0x7f00ec000ca0)
Transport task pool, serving request
[3978092954853617] There's a message for JANUS TextRoom plugin
[3978092954853617] Remote SDP:
v=0
o=- 6893769308065182494 2 IN IP4 127.0.0.1
s=-
t=0 0
a=msid-semantic: WMS
m=application 0 DTLS/SCTP 5000
c=IN IP4 0.0.0.0
a=mid:data
a=sctpmap:5000 webrtc-datachannel 1024
[3978092954853617] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have NOT been negotiated
[WARN] [3978092954853617] Skipping disabled/unsupported media line...
[ERR] [janus.c:janus_process_incoming_request:1193] Error processing SDP
[RvFOm1M7roLf] Returning Janus API error 465 (Error processing SDP)
- I stumbled on yet another open source WebRTC SFU, this one based on Node.js: Mediasoup https://mediasoup.org/about/
- I also found a formal description of SFUs in RFC7667 https://tools.ietf.org/html/rfc7667#section-3.7
- doh! It looks like today's `./configure` was missing the `--prefix=/opt/janus` flag I had used per the README in their github repo, so my test above was still running the old version https://github.com/meetecho/janus-gateway
[root@ip-172-31-28-115 janus-gateway]# LD_LIBRARY_PATH=/usr/lib && /opt/janus/bin/janus --version Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd Compiled on: Fri May 4 00:11:11 UTC 2018 janus 0.4.0 [root@ip-172-31-28-115 janus-gateway]# LD_LIBRARY_PATH=/usr/local/lib && janus --version Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd Compiled on: Mon May 14 14:25:03 UTC 2018 janus 0.4.0 [root@ip-172-31-28-115 janus-gateway]# which janus /usr/local/bin/janus [root@ip-172-31-28-115 janus-gateway]#
- I did the compile again, and here's the result
[root@ip-172-31-28-115 janus-gateway]# LD_LIBRARY_PATH=/usr/lib && /opt/janus/bin/janus --version Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd Compiled on: Mon May 14 15:45:32 UTC 2018 janus 0.4.0 [root@ip-172-31-28-115 janus-gateway]#
- I had issues starting janus, which were resolved by adding '/usr/local/lib' to '/etc/ld.so.conf.d/janus.conf' and running `ldconfig`
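Expressed as commands, the fix described in the line above is roughly the following (assuming `make install` placed the libraries in /usr/local/lib, as the version output earlier suggests):

```shell
# tell the dynamic linker where the freshly-installed janus libraries live
echo '/usr/local/lib' >> /etc/ld.so.conf.d/janus.conf

# rebuild the linker cache so janus can find them at startup
ldconfig
```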
- unfortunately, I still have the same issue. Note that the SDP message differs between chrome & firefox
- here's the SDP message in chromium (per the janus server logs at highest verbosity)
Session: 6375072996036015 Handle: 1600250370708259 Processing POST data (application/json) (0 bytes)... [transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer {"janus":"message","body":{"request":"ack"},"transaction":"LMu5bjOxNA1q","jsep":{"type":"answer","sdp":"v=0\r\no=- 8310479853867794458 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=msid-semantic: WMS\r\nm=application 0 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"}} Forwarding request to the core (0x7f1538000f80) Got a Janus API request from janus.transport.http (0x7f1538000f80) Transport task pool, serving request [1600250370708259] There's a message for JANUS TextRoom plugin [1600250370708259] Remote SDP: v=0 o=- 8310479853867794458 2 IN IP4 127.0.0.1 s=- t=0 0 a=msid-semantic: WMS m=application 0 DTLS/SCTP 5000 c=IN IP4 0.0.0.0 a=mid:data a=sctpmap:5000 webrtc-datachannel 1024 [1600250370708259] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have NOT been negotiated [WARN] [1600250370708259] Skipping disabled/unsupported media line... [ERR] [janus.c:janus_process_incoming_request:1193] Error processing SDP [LMu5bjOxNA1q] Returning Janus API error 465 (Error processing SDP)
- and here's the same thing when the client running the textroomtest demo is firefox instead
Session: 654029176767371 Handle: 6994444633419195 Processing POST data (application/json) (0 bytes)... [transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer {"janus":"message","body":{"request":"ack"},"transaction":"HtiY4UW9UZDF","jsep":{"type":"answer","sdp":"v=0\r\no=mozilla...THIS_IS_SDPARTA-50.1.0 4746781219317630708 0 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 CC:E1:78:4E:53:A6:A7:9F:DB:06:B4:4C:68:E8:FB:8B:B3:C7:56:C8:8D:B8:F0:A8:B4:5F:E4:45:FF:1B:39:7B\r\na=group:BUNDLE\r\na=ice-options:trickle\r\na=msid-semantic:WMS *\r\nm=application 0 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=inactive\r\na=sctpmap:5000 rejected 0\r\n"}} Forwarding request to the core (0x7f155001b920) Got a Janus API request from janus.transport.http (0x7f155001b920) Transport task pool, serving request [6994444633419195] There's a message for JANUS TextRoom plugin [6994444633419195] Remote SDP: v=0 o=mozilla...THIS_IS_SDPARTA-50.1.0 4746781219317630708 0 IN IP4 0.0.0.0 s=- t=0 0 a=fingerprint:sha-256 CC:E1:78:4E:53:A6:A7:9F:DB:06:B4:4C:68:E8:FB:8B:B3:C7:56:C8:8D:B8:F0:A8:B4:5F:E4:45:FF:1B:39:7B a=group:BUNDLE a=ice-options:trickle a=msid-semantic:WMS * m=application 0 DTLS/SCTP 5000 c=IN IP4 0.0.0.0 a=inactive a=sctpmap:5000 rejected 0 [6994444633419195] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have NOT been negotiated [6994444633419195] Fingerprint (global) : sha-256 CC:E1:78:4E:53:A6:A7:9F:DB:06:B4:4C:68:E8:FB:8B:B3:C7:56:C8:8D:B8:F0:A8:B4:5F:E4:45:FF:1B:39:7B [WARN] [6994444633419195] Skipping disabled/unsupported media line... [ERR] [janus.c:janus_process_incoming_request:1193] Error processing SDP [HtiY4UW9UZDF] Returning Janus API error 465 (Error processing SDP)
- well, when starting janus, there is a warning stating that Data Channels support is *not* compiled
[root@ip-172-31-28-115 janus-gateway]# /opt/janus/bin/janus ... [WARN] The libsrtp installation does not support AES-GCM profiles Fingerprint of our certificate: D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38 [WARN] Data Channels support not compiled [WARN] Event handlers support disabled Plugins folder: /opt/janus/lib/janus/plugins Loading plugin 'libjanus_recordplay.so'...
- ugh, I forgot `make clean` before the `make && make install`; adding that step got me much further! When I loaded the text room, it prompted me for my username (before, it just hung indefinitely). Unfortunately, after this prompt popped up, I got a notification in the browser that we'd lost the connection to the janus gateway. Hopping back to the server, I saw a Segmentation Fault :(
Session: 4989268396723854 Handle: 45723605327998 Processing POST data (application/json) (0 bytes)... [transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer {"janus":"message","body":{"request":"ack"},"transaction":"9tdbOIEVuv9q","jsep":{"type":"answer","sdp":"v=0\r\no=- 8019385961591100028 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE data\r\na=msid-semantic: WMS\r\nm=application 9 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\nb=AS:30\r\na=ice-ufrag:MNDb\r\na=ice-pwd:8F39sum8obXhdVgCLhNhUVLo\r\na=fingerprint:sha-256 D5:D6:25:60:4D:24:9A:37:79:55:4C:B2:F4:99:B0:69:DE:A5:F4:F0:4C:72:CD:67:5C:0F:A9:17:BB:E1:FC:00\r\na=setup:active\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"}} Forwarding request to the core (0x7fa31c004620) Got a Janus API request from janus.transport.http (0x7fa31c004620) Transport task pool, serving request [45723605327998] There's a message for JANUS TextRoom plugin [45723605327998] Remote SDP: v=0 o=- 8019385961591100028 2 IN IP4 127.0.0.1 s=- t=0 0 a=group:BUNDLE data a=msid-semantic: WMS m=application 9 DTLS/SCTP 5000 c=IN IP4 0.0.0.0 b=AS:30 a=ice-ufrag:MNDb a=ice-pwd:8F39sum8obXhdVgCLhNhUVLo a=fingerprint:sha-256 D5:D6:25:60:4D:24:9A:37:79:55:4C:B2:F4:99:B0:69:DE:A5:F4:F0:4C:72:CD:67:5C:0F:A9:17:BB:E1:FC:00 a=setup:active a=mid:data a=sctpmap:5000 webrtc-datachannel 1024 [45723605327998] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have been negotiated [45723605327998] Parsing SCTP candidates (stream=1)... 
[45723605327998] ICE ufrag (local): MNDb [45723605327998] ICE pwd (local): 8F39sum8obXhdVgCLhNhUVLo [45723605327998] Fingerprint (local) : sha-256 D5:D6:25:60:4D:24:9A:37:79:55:4C:B2:F4:99:B0:69:DE:A5:F4:F0:4C:72:CD:67:5C:0F:A9:17:BB:E1:FC:00 [45723605327998] DTLS setup (local): active [45723605327998] Setting accept state (DTLS server) [45723605327998] Data Channel mid: data Got a sctpmap attribute: 5000 webrtc-datachannel 1024 [45723605327998] -- ICE Trickling is supported by the browser, waiting for remote candidates... ------------------------------------------- >> Anonymized ------------------------------------------- Creating plugin result... Sending Janus API response to janus.transport.http (0x7fa31c004620) Got a Janus API response to send (0x7fa31c004620) Destroying plugin result... [45723605327998] Sending event to transport... Sending event to janus.transport.http (0x7fa31c003ce0) Got a Janus API event to send (0x7fa31c003ce0) >> Pushing event: 0 (Success) [transports/janus_http.c:janus_http_handler:1137] Got a HTTP POST request on /janus/4989268396723854/45723605327998... [transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now... 
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089 [transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive [transports/janus_http.c:janus_http_headers:1690] Content-Length: 227 [transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */* [transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org [transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 [transports/janus_http.c:janus_http_headers:1690] content-type: application/json [transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html [transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, br [transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8 [transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/4989268396723854/45723605327998... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 4989268396723854 Handle: 45723605327998 Processing POST data (application/json) (227 bytes)... [transports/janus_http.c:janus_http_handler:1248] -- Data we have now (227 bytes) [transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/4989268396723854/45723605327998... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 4989268396723854 Handle: 45723605327998 Processing POST data (application/json) (0 bytes)... 
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer {"janus":"trickle","candidate":{"candidate":"candidate:201398067 1 udp 2122260223 10.137.2.17 46853 typ host generation 0 ufrag MNDb network-id 1 network-cost 50","sdpMid":"data","sdpMLineIndex":0},"transaction":"JpDJKwdL9Rj4"} Forwarding request to the core (0x7fa30c014790) Got a Janus API request from janus.transport.http (0x7fa30c014790) [45723605327998] Trickle candidate (data): candidate:201398067 1 udp 2122260223 10.137.2.17 46853 typ host generation 0 ufrag MNDb network-id 1 network-cost 50 [45723605327998] Adding remote candidate component:1 stream:1 type:host 10.137.2.17:46853 [45723605327998] Candidate added to the list! (1 elements for 1/1) [45723605327998] ICE already started for this component, setting candidates we have up to now [45723605327998] ## Setting remote candidates: stream 1, component 1 (1 in the list) [45723605327998] >> Remote Stream #1, Component #1 [45723605327998] Address: 10.137.2.17:46853 [45723605327998] Priority: 2122260223 [45723605327998] Foundation: 201398067 [45723605327998] Username: MNDb [45723605327998] Password: 8F39sum8obXhdVgCLhNhUVLo [45723605327998] Setting remote credentials... [45723605327998] Component state changed for component 1 in stream 1: 2 (connecting) [45723605327998] Discovered new remote candidate for component 1 in stream 1: foundation=1 [45723605327998] Stream #1, Component #1 [45723605327998] Address: 66.18.33.130:41785 [45723605327998] Priority: 1853824767 [45723605327998] Foundation: 1 [45723605327998] Remote candidates set! Sending Janus API response to janus.transport.http (0x7fa30c014790) Got a Janus API response to send (0x7fa30c014790) New connection on REST API: ::ffff:66.18.33.130 [transports/janus_http.c:janus_http_handler:1137] Got a HTTP POST request on /janus/4989268396723854/45723605327998... [transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now... 
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089 [transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive [transports/janus_http.c:janus_http_headers:1690] Content-Length: 79 [transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */* [transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org [transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 [transports/janus_http.c:janus_http_headers:1690] content-type: application/json [transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html [transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, br [transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8 [transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/4989268396723854/45723605327998... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 4989268396723854 Handle: 45723605327998 Processing POST data (application/json) (79 bytes)... [transports/janus_http.c:janus_http_handler:1248] -- Data we have now (79 bytes) [transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/4989268396723854/45723605327998... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 4989268396723854 Handle: 45723605327998 Processing POST data (application/json) (0 bytes)... 
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer {"janus":"trickle","candidate":{"completed":true},"transaction":"xVqDncVePyih"} Forwarding request to the core (0x7fa30c014790) Got a Janus API request from janus.transport.http (0x7fa30c014790) [transports/janus_http.c:janus_http_handler:1137] Got a HTTP POST request on /janus/4989268396723854/45723605327998... [transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now... [transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089 [transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive [transports/janus_http.c:janus_http_headers:1690] Content-Length: 260 [transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */* [transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org [transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 [transports/janus_http.c:janus_http_headers:1690] content-type: application/json [transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html [transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, br [transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8 [transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/4989268396723854/45723605327998... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 4989268396723854 Handle: 45723605327998 Processing POST data (application/json) (260 bytes)... [transports/janus_http.c:janus_http_handler:1248] -- Data we have now (260 bytes) [transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/4989268396723854/45723605327998... 
[transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 4989268396723854 Handle: 45723605327998 Processing POST data (application/json) (0 bytes)... [transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer {"janus":"trickle","candidate":{"candidate":"candidate:2774440166 1 udp 1686052607 66.18.33.130 41785 typ srflx raddr 10.137.2.17 rport 46853 generation 0 ufrag MNDb network-id 1 network-cost 50","sdpMid":"data","sdpMLineIndex":0},"transaction":"RByDZe9bnARf"} Forwarding request to the core (0x7fa31c003940) Got a Janus API request from janus.transport.http (0x7fa31c003940) No more remote candidates for handle 45723605327998! Sending Janus API response to janus.transport.http (0x7fa30c014790) Got a Janus API response to send (0x7fa30c014790) [45723605327998] Trickle candidate (data): candidate:2774440166 1 udp 1686052607 66.18.33.130 41785 typ srflx raddr 10.137.2.17 rport 46853 generation 0 ufrag MNDb network-id 1 network-cost 50 [45723605327998] Adding remote candidate component:1 stream:1 type:srflx 10.137.2.17:46853 --> 66.18.33.130:41785 [45723605327998] Candidate added to the list! (2 elements for 1/1) [45723605327998] Trickle candidate added! Sending Janus API response to janus.transport.http (0x7fa31c003940) Got a Janus API response to send (0x7fa31c003940) [45723605327998] Looks like DTLS! [45723605327998] Component state changed for component 1 in stream 1: 3 (connected) [45723605327998] ICE send thread started...; 0x7fa2fc015190 [45723605327998] Looks like DTLS! New connection on REST API: ::ffff:66.18.33.130 [45723605327998] New selected pair for component 1 in stream 1: 1 <-> 2774440166 [45723605327998] Component is ready enough, starting DTLS handshake... 
janus_dtls_bio_filter_ctrl: 50 janus_dtls_bio_filter_ctrl: 6 janus_dtls_bio_filter_ctrl: 50 [45723605327998] Creating retransmission timer with ID 4 [transports/janus_http.c:janus_http_handler:1137] Got a HTTP GET request on /janus/4989268396723854... [transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now... [transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089 [transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive [transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */* [transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org [transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 [transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html [transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, sdch, br [transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8 [transports/janus_http.c:janus_http_handler:1170] Processing HTTP GET request on /janus/4989268396723854... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 4989268396723854 Got a Janus API request from janus.transport.http (0x7fa30c014790) Session 4989268396723854 found... returning up to 1 messages Got a keep-alive on session 4989268396723854 Sending Janus API response to janus.transport.http (0x7fa30c014790) Got a Janus API response to send (0x7fa30c014790) New connection on REST API: ::ffff:66.18.33.130 [45723605327998] Looks like DTLS! [45723605327998] Written 156 bytes on the read BIO... 
janus_dtls_bio_filter_ctrl: 50 janus_dtls_bio_filter_ctrl: 49 Advertizing MTU: 1200 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_write: 0x7fa31c051e00, 1107 -- 1107 New list length: 1 janus_dtls_bio_filter_ctrl: 50 [45723605327998] ... and read -1 of them from SSL... [45723605327998] >> Going to send DTLS data: 1107 bytes [45723605327998] >> >> Read 1107 bytes from the write_BIO... [45723605327998] >> >> ... and sent 1107 of those bytes on the socket [45723605327998] Initialization not finished yet... [45723605327998] DTLSv1_get_timeout: 968 [45723605327998] DTLSv1_get_timeout: 918 [45723605327998] Looks like DTLS! [45723605327998] Written 591 bytes on the read BIO... janus_dtls_bio_filter_ctrl: 50 janus_dtls_bio_filter_ctrl: 51 janus_dtls_bio_filter_ctrl: 53 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 52 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_ctrl: 49 janus_dtls_bio_filter_write: 0x7fa31c051e00, 570 -- 570 New list length: 1 janus_dtls_bio_filter_ctrl: 7 janus_dtls_bio_filter_ctrl: 50 [45723605327998] ... and read -1 of them from SSL... [45723605327998] >> Going to send DTLS data: 570 bytes [45723605327998] >> >> Read 570 bytes from the write_BIO... [45723605327998] >> >> ... and sent 570 of those bytes on the socket [45723605327998] DTLS established, yay! [45723605327998] Computing sha-256 fingerprint of remote certificate... [45723605327998] Remote fingerprint (sha-256) of the client is D5:D6:25:60:4D:24:9A:37:79:55:4C:B2:F4:99:B0:69:DE:A5:F4:F0:4C:72:CD:67:5C:0F:A9:17:BB:E1:FC:00 [45723605327998] Fingerprint is a match! 
Segmentation fault (core dumped) [root@ip-172-31-28-115 janus-gateway]#
- I tried this again in firefox, and the text room fully loaded!
- I tried this in chromium, and it segfaulted again :(
- anyway, I tried this in 2x distinct firefox windows, and each could read the other's text messages.
- I tested jangouts, and text works there now too!
- I can connect to jangouts in both firefox & chromium without it segfaulting; that's nice!
- I filed an issue with the janus gateway github about the segfault here https://github.com/meetecho/janus-gateway/issues/1233
- holy crap, I got a response in less than 5 minutes! They wanted a gdb stacktrace, which I provided
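For reference, a rough sketch of how such a stacktrace can be produced (the core file name here is a placeholder; whatever core file the crash actually produces would be used):

```shell
# allow core dumps in this shell, then reproduce the crash
ulimit -c unlimited
/opt/janus/bin/janus

# after the segfault, extract a full backtrace from the resulting core file
# ("core.12345" is a placeholder for the actual core file name)
gdb /opt/janus/bin/janus core.12345 -batch -ex 'bt full'
```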
- It was also pointed out that building with an Address Sanitizer would be helpful, per their documentation https://janus.conf.meetecho.com/docs/debug. I attempted this, but got an error
yum --enablerepo=* install -y libasan [root@ip-172-31-28-115 janus-gateway]# CFLAGS="-fsanitize=address -fno-omit-frame-pointer" LDFLAGS="-lasan" ./configure --prefix="/opt/janus" --enable-data-channels checking for a BSD-compatible install... /bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether make supports nested variables... yes checking whether make supports nested variables... (cached) yes checking for style of include used by make... GNU checking for gcc... gcc checking whether the C compiler works... no configure: error: in `/root/sandbox/janus-gateway': configure: error: C compiler cannot create executables See `config.log' for more details [root@ip-172-31-28-115 janus-gateway]#
- when I tried using 'libasan-static' instead, it worked *shrug*
yum --enablerepo=* install -y libasan
CFLAGS="-fsanitize=address -fno-omit-frame-pointer" LDFLAGS="-lasan" ./configure --prefix="/opt/janus" --enable-data-channels
make clean
make
make install
- great news! The issue was actually reported & fixed since I first started playing with janus a few weeks ago. I did a git pull & recompiled, and the segfaults stopped. I found this after a comment back-and-forth with a developer on my issue within an hour of posting it. This is an amazingly active project! https://github.com/meetecho/janus-gateway/issues/1223
- Unfortunately, though the segfault is fixed, the text room still won't load in Chromium.
- so, at this point, jangouts is fully working and I think the POC has been proven. Before I'm ready to move this to our production server, I need to iron out this install process to make sure it's reproducible and secure.
- reproducibility is just a matter of terminating the ec2 instance, following my documented commands, and ending up with the same result
- security is a bit more work. We've gone to great lengths to ensure that most of our server's daemons are not internet-facing unless they must be, and that what is (nginx) and the background daemons it services (httpd running php) are as locked-down as possible. Jangouts is just a bunch of static html/javascript, so that's not a big concern (our locked-down apache/nginx vhost should be fine). But Janus has a public-facing REST API, plus public-facing ICE for STUN/TURN. If, for example, any of these components has a coding error that leads to a buffer overflow and then to remote code execution, it could undermine all of our efforts in securing the other applications on our production server. Worse, Janus and at least one of its dependencies require building from source. Such builds are likely to become stale and not be updated (unlike packages installed from the repos, which are set up to automatically download critical security updates).
- I need to spend some time investigating Janus and ICE to see how to harden it as much as possible
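As a first pass, a hypothetical firewall sketch might look like the following. The ports here are assumptions: 8089 is our REST-over-HTTPS port from the logs above, 7088 is (I believe) Janus's default admin API HTTP port, and the UDP range would need to match the `rtp_port_range` setting in janus.cfg.

```shell
# hypothetical hardening sketch -- ports are assumptions, see above
iptables -A INPUT -p tcp --dport 8089 -j ACCEPT               # public Janus REST API (HTTPS)
iptables -A INPUT -p udp --dport 10000:10200 -j ACCEPT        # ICE/RTP range (must match rtp_port_range in janus.cfg)
iptables -A INPUT -s 127.0.0.1 -p tcp --dport 7088 -j ACCEPT  # admin API: localhost only
iptables -A INPUT -p tcp --dport 7088 -j DROP
```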
- first, I went back to the basics; Google worked on WebRTC, and here's one of their presentations from 2013 https://www.youtube.com/watch?v=p2HzZkd2A40&t=21m12s
- I learned that ICE is a framework for NAT traversal that utilizes both STUN _and_ TURN. It uses the more lightweight STUN whenever possible (>80% of the time), and falls back to TURN when required (at a cost). Also, every TURN server supports STUN; TURN is just STUN with relaying added in. And the relaying taxes bandwidth considerably at scale, whereas STUN scales well. https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/
- I discovered a couple interesting techs that use webrtc
- PeerCDN was supposed to be a p2p CDN, but the site appears unresponsive. Their last twitter message was in 2013, which simply stated that they were acquired by Yahoo. And then, silence.. https://twitter.com/peercdn
- togetherJS is like an ephemeral etherpad that uses WebRTC for collaboration https://togetherjs.com/docs/#technology-overview
- this is a great explanation of signaling used for WebRTC https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/
- The more I read, the more I think that our bottleneck on Jitsi Meet is because it's an SFU instead of a dedicated MCU. The article above mentions a few open source MCUs: Licode and OpenTok's Mantis
Fri May 11, 2018
- updated our backup script (/root/backups/backup.sh) on hetzner2 to encrypt backups before shipping them off to dreamhost
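The encryption step itself might look something like this (a minimal sketch; the actual backup.sh, filenames, and key path may differ):

```shell
# symmetrically encrypt the backup archive before uploading it offsite;
# the passphrase is read from a root-only key file (path is an assumption)
openssl enc -aes-256-cbc -salt \
  -in backup.tar.gz -out backup.tar.gz.enc \
  -pass file:/root/backups/backup.key
```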
- also hardened the permissions on the backup log file, as it may leak passwords
chown -R root:root /var/log/backups
chmod -R 0700 /var/log/backups
find /var/log/backups -type f -exec chmod 0600 {} \;
- continuing with the jangouts poc, I began researching 'sdp', as that was the error that both the server (shown below) & the client spat out when attempting to load the Janus Text Room demo https://jangouts.opensourceecology.org/textroomtest.html
Creating new session: 2994617815140817; 0x7f0884001580 Creating new handle in session 2994617815140817: 4577123645728553; 0x7f0884001580 0x7f0884079a90 [4577123645728553] Creating ICE agent (ICE Full mode, controlling) [WARN] [4577123645728553] Skipping disabled/unsupported media line... [WARN] [4577123645728553] Skipping disabled/unsupported media line... [ERR] [janus.c:janus_process_incoming_request:1193] Error processing SDP
- I also got a dump of the handle from the admin API when sitting in the text room
{ "session_id": 390036153431556, "session_last_activity": 1846998549747, "session_transport": "janus.transport.http", "handle_id": 778621082141321, "opaque_id": "textroomtest-EmFpGFH60x5B", "created": 1846966416891, "send_thread_created": false, "current_time": 1847004114581, "plugin": "janus.plugin.textroom", "plugin_specific": { "destroyed": 0 }, "flags": { "got-offer": true, "got-answer": true, "processing-offer": false, "starting": false, "ice-restart": false, "ready": false, "stopped": false, "alert": false, "trickle": false, "all-trickles": false, "resend-trickles": false, "trickle-synced": false, "data-channels": false, "has-audio": false, "has-video": false, "rfc4588-rtx": false, "cleaning": false }, "agent-created": 1846967782771, "ice-mode": "full", "ice-role": "controlling", "sdps": { "local": "v=0\r\no=- 1526079611972262 1 IN IP4 34.210.153.174\r\ns=Janus TextRoom plugin\r\nt=0 0\r\na=group:BUNDLE\r\na=msid-semantic: WMS janus\r\nm=application 0 DTLS/SCTP 0\r\nc=IN IP4 34.210.153.174\r\na=inactive\r\n" }, "queued-packets": 0, "streams": [ { "id": 1, "ready": -1, "ssrc": {}, "direction": { "audio-send": false, "audio-recv": false, "video-send": false, "video-recv": false }, "components": [ { "id": 1, "state": "disconnected", "dtls": { "fingerprint": "D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38", "dtls-role": "actpass", "dtls-state": "created", "retransmissions": 0, "valid": false, "ready": false }, "in_stats": { "data_packets": 0, "data_bytes": 0 }, "out_stats": { "data_packets": 0, "data_bytes": 0 } } ] } ] }
- I changed the debug level in janus.cfg from '4' (the default) to '7' (the maximum). That produced a ton more output
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP POST request on /janus... [transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now... [transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089 [transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive [transports/janus_http.c:janus_http_headers:1690] Content-Length: 47 [transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */* [transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org [transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 [transports/janus_http.c:janus_http_headers:1690] content-type: application/json [transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html [transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, br [transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8 [transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Processing POST data (application/json) (47 bytes)... [transports/janus_http.c:janus_http_handler:1248] -- Data we have now (47 bytes) [transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Processing POST data (application/json) (0 bytes)... 
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer {"janus":"create","transaction":"tT3AivyGrmwl"} Forwarding request to the core (0x7fd43c007100) Got a Janus API request from janus.transport.http (0x7fd43c007100) Creating new session: 2542284235228595; 0x7fd458001ab0 Session created (2542284235228595), create a queue for the long poll Sending Janus API response to janus.transport.http (0x7fd43c007100) Got a Janus API response to send (0x7fd43c007100) [transports/janus_http.c:janus_http_handler:1137] Got a HTTP GET request on /janus/2542284235228595... [transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now... [transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089 [transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive [transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */* [transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org [transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 [transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html [transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, sdch, br [transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8 [transports/janus_http.c:janus_http_handler:1170] Processing HTTP GET request on /janus/2542284235228595... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 2542284235228595 Got a Janus API request from janus.transport.http (0x7fd43c001c10) Session 2542284235228595 found... returning up to 1 messages [transports/janus_http.c:janus_http_notifier:1723] ... handling long poll... 
Got a keep-alive on session 2542284235228595
Sending Janus API response to janus.transport.http (0x7fd43c001c10)
Got a Janus API response to send (0x7fd43c001c10)
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP OPTIONS request on /janus/2542284235228595...
[transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now...
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089
[transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive
[transports/janus_http.c:janus_http_headers:1690] Access-Control-Request-Method: POST
[transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org
[transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36
[transports/janus_http.c:janus_http_headers:1690] Access-Control-Request-Headers: content-type
[transports/janus_http.c:janus_http_headers:1690] Accept: */*
[transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html
[transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, sdch, br
[transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8
New connection on REST API: ::ffff:76.97.223.185
New connection on REST API: ::ffff:76.97.223.185
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP POST request on /janus/2542284235228595...
[transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now...
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089
[transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive
[transports/janus_http.c:janus_http_headers:1690] Content-Length: 120
[transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */*
[transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org
[transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36
[transports/janus_http.c:janus_http_headers:1690] content-type: application/json
[transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html
[transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, br
[transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/2542284235228595...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 2542284235228595
Processing POST data (application/json) (120 bytes)...
[transports/janus_http.c:janus_http_handler:1248] -- Data we have now (120 bytes)
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/2542284235228595...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 2542284235228595
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus":"attach","plugin":"janus.plugin.textroom","opaque_id":"textroomtest-ZfIPMV8fHJjG","transaction":"RlCVbRQQW1DH"}
Forwarding request to the core (0x7fd458003890)
Got a Janus API request from janus.transport.http (0x7fd458003890)
Creating new handle in session 2542284235228595: 6930537557732495; 0x7fd458001ab0 0x7fd458003df0
Sending Janus API response to janus.transport.http (0x7fd458003890)
Got a Janus API response to send (0x7fd458003890)
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP OPTIONS request on /janus/2542284235228595/6930537557732495...
[transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now...
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089
[transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive
[transports/janus_http.c:janus_http_headers:1690] Access-Control-Request-Method: POST
[transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org
[transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36
[transports/janus_http.c:janus_http_headers:1690] Access-Control-Request-Headers: content-type
[transports/janus_http.c:janus_http_headers:1690] Accept: */*
[transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html
[transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, sdch, br
[transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8
New connection on REST API: ::ffff:76.97.223.185
New connection on REST API: ::ffff:76.97.223.185
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP POST request on /janus/2542284235228595/6930537557732495...
[transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now...
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089
[transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive
[transports/janus_http.c:janus_http_headers:1690] Content-Length: 75
[transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */*
[transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org
[transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36
[transports/janus_http.c:janus_http_headers:1690] content-type: application/json
[transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html
[transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, br
[transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/2542284235228595/6930537557732495...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 2542284235228595
Handle: 6930537557732495
Processing POST data (application/json) (75 bytes)...
[transports/janus_http.c:janus_http_handler:1248] -- Data we have now (75 bytes)
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/2542284235228595/6930537557732495...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 2542284235228595
Handle: 6930537557732495
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus":"message","body":{"request":"setup"},"transaction":"DQL62lpsIPOW"}
Forwarding request to the core (0x7fd45c002d70)
Got a Janus API request from janus.transport.http (0x7fd45c002d70)
Transport task pool, serving request
[6930537557732495] There's a message for JANUS TextRoom plugin
Creating plugin result...
Sending Janus API response to janus.transport.http (0x7fd45c002d70)
Got a Janus API response to send (0x7fd45c002d70)
Destroying plugin result...
[6930537557732495] Audio has NOT been negotiated
[6930537557732495] Video has NOT been negotiated
[6930537557732495] SCTP/DataChannels have NOT been negotiated
[6930537557732495] Setting ICE locally: got ANSWER (0 audios, 0 videos)
[6930537557732495] Creating ICE agent (ICE Full mode, controlling)
[6930537557732495] Adding 172.31.28.115 to the addresses to gather candidates for
[6930537557732495] Gathering done for stream 1
janus_dtls_bio_filter_ctrl: 6
-------------------------------------------
 >> Anonymized
-------------------------------------------
[WARN] [6930537557732495] Skipping disabled/unsupported media line...
-------------------------------------------
 >> Merged (193 bytes)
-------------------------------------------
v=0
o=- 1526081202248668 1 IN IP4 34.210.153.174
s=Janus TextRoom plugin
t=0 0
a=group:BUNDLE
a=msid-semantic: WMS janus
m=application 0 DTLS/SCTP 0
c=IN IP4 34.210.153.174
a=inactive
[6930537557732495] Sending event to transport...
Sending event to janus.transport.http (0x7fd43c007100)
Got a Janus API event to send (0x7fd43c007100)
>> Pushing event: 0 (took 368 us)
[6930537557732495] ICE thread started; 0x7fd458003df0
[ice.c:janus_ice_thread:2574] [6930537557732495] Looping (ICE)...
We have a message to serve...
{
   "janus": "event",
   "session_id": 2542284235228595,
   "transaction": "DQL62lpsIPOW",
   "sender": 6930537557732495,
   "plugindata": {
      "plugin": "janus.plugin.textroom",
      "data": {
         "textroom": "event",
         "result": "ok"
      }
   },
   "jsep": {
      "type": "offer",
      "sdp": "v=0\r\no=- 1526081202248668 1 IN IP4 34.210.153.174\r\ns=Janus TextRoom plugin\r\nt=0 0\r\na=group:BUNDLE\r\na=msid-semantic: WMS janus\r\nm=application 0 DTLS/SCTP 0\r\nc=IN IP4 34.210.153.174\r\na=inactive\r\n"
   }
}
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP GET request on /janus/2542284235228595...
[transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now...
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089
[transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive
[transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */*
[transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org
[transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36
[transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html
[transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, sdch, br
[transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP GET request on /janus/2542284235228595...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 2542284235228595
Got a Janus API request from janus.transport.http (0x7fd43c001c10)
Session 2542284235228595 found... returning up to 1 messages
[transports/janus_http.c:janus_http_notifier:1723] ... handling long poll...
Got a keep-alive on session 2542284235228595
Sending Janus API response to janus.transport.http (0x7fd43c001c10)
Got a Janus API response to send (0x7fd43c001c10)
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP POST request on /janus/2542284235228595/6930537557732495...
[transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now...
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089
[transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive
[transports/janus_http.c:janus_http_headers:1690] Content-Length: 310
[transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */*
[transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org
[transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36
[transports/janus_http.c:janus_http_headers:1690] content-type: application/json
[transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html
[transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, br
[transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/2542284235228595/6930537557732495...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 2542284235228595
Handle: 6930537557732495
Processing POST data (application/json) (310 bytes)...
[transports/janus_http.c:janus_http_handler:1248] -- Data we have now (310 bytes)
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP POST request on /janus/2542284235228595/6930537557732495...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 2542284235228595
Handle: 6930537557732495
Processing POST data (application/json) (0 bytes)...
[transports/janus_http.c:janus_http_handler:1253] Done getting payload, we can answer
{"janus":"message","body":{"request":"ack"},"transaction":"rR1mPGNHkKYW","jsep":{"type":"answer","sdp":"v=0\r\no=- 5794779635134951790 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=msid-semantic: WMS\r\nm=application 0 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n"}}
Forwarding request to the core (0x7fd45c002d70)
Got a Janus API request from janus.transport.http (0x7fd45c002d70)
Transport task pool, serving request
[6930537557732495] There's a message for JANUS TextRoom plugin
[6930537557732495] Remote SDP:
v=0
o=- 5794779635134951790 2 IN IP4 127.0.0.1
s=-
t=0 0
a=msid-semantic: WMS
m=application 0 DTLS/SCTP 5000
c=IN IP4 0.0.0.0
a=mid:data
a=sctpmap:5000 webrtc-datachannel 1024
[6930537557732495] Audio has NOT been negotiated, Video has NOT been negotiated, SCTP/DataChannels have NOT been negotiated
[WARN] [6930537557732495] Skipping disabled/unsupported media line...
[ERR] [janus.c:janus_process_incoming_request:1193] Error processing SDP
[rR1mPGNHkKYW] Returning Janus API error 465 (Error processing SDP)
Got a Janus API response to send (0x7fd45c002d70)
Long poll time out for session 2542284235228595...
We have a message to serve...
{
   "janus": "keepalive"
}
[transports/janus_http.c:janus_http_handler:1137] Got a HTTP GET request on /janus/2542284235228595...
[transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now...
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089
[transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive
[transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */*
[transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org
[transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36
[transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html
[transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, sdch, br
[transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8
[transports/janus_http.c:janus_http_handler:1170] Processing HTTP GET request on /janus/2542284235228595...
[transports/janus_http.c:janus_http_handler:1223] ... parsing request...
Session: 2542284235228595
Got a Janus API request from janus.transport.http (0x7fd43c001c10)
Session 2542284235228595 found... returning up to 1 messages
[transports/janus_http.c:janus_http_notifier:1723] ... handling long poll...
Got a keep-alive on session 2542284235228595
Sending Janus API response to janus.transport.http (0x7fd43c001c10)
Got a Janus API response to send (0x7fd43c001c10)
Long poll time out for session 2542284235228595...
We have a message to serve...
{
   "janus": "keepalive"
}
[file-live-sample] Rewind! (/opt/janus/share/janus/streams/radio.alaw)
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089 [transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive [transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */* [transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org [transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 [transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html [transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, sdch, br [transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8 [transports/janus_http.c:janus_http_handler:1170] Processing HTTP GET request on /janus/2542284235228595... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 2542284235228595 Got a Janus API request from janus.transport.http (0x7fd43c001c10) Session 2542284235228595 found... returning up to 1 messages [transports/janus_http.c:janus_http_notifier:1723] ... handling long poll... Got a keep-alive on session 2542284235228595 Sending Janus API response to janus.transport.http (0x7fd43c001c10) Got a Janus API response to send (0x7fd43c001c10) Long poll time out for session 2542284235228595... We have a message to serve... { "janus": "keepalive" } [transports/janus_http.c:janus_http_handler:1137] Got a HTTP GET request on /janus/2542284235228595... [transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now... 
[transports/janus_http.c:janus_http_headers:1690] Host: jangouts.opensourceecology.org:8089 [transports/janus_http.c:janus_http_headers:1690] Connection: keep-alive [transports/janus_http.c:janus_http_headers:1690] accept: application/json, text/plain, */* [transports/janus_http.c:janus_http_headers:1690] Origin: https://jangouts.opensourceecology.org [transports/janus_http.c:janus_http_headers:1690] User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 [transports/janus_http.c:janus_http_headers:1690] Referer: https://jangouts.opensourceecology.org/textroomtest.html [transports/janus_http.c:janus_http_headers:1690] Accept-Encoding: gzip, deflate, sdch, br [transports/janus_http.c:janus_http_headers:1690] Accept-Language: en-US,en;q=0.8 [transports/janus_http.c:janus_http_handler:1170] Processing HTTP GET request on /janus/2542284235228595... [transports/janus_http.c:janus_http_handler:1223] ... parsing request... Session: 2542284235228595 Got a Janus API request from janus.transport.http (0x7fd43c001c10) Session 2542284235228595 found... returning up to 1 messages [transports/janus_http.c:janus_http_notifier:1723] ... handling long poll... Got a keep-alive on session 2542284235228595 Sending Janus API response to janus.transport.http (0x7fd43c001c10) Got a Janus API response to send (0x7fd43c001c10) Long poll time out for session 2542284235228595... We have a message to serve... { "janus": "keepalive" } [transports/janus_http.c:janus_http_handler:1137] Got a HTTP GET request on /janus/2542284235228595... [transports/janus_http.c:janus_http_handler:1138] ... Just parsing headers for now... 
- the description of the "Text Room" plugin says "A text room demo, using DataChannels only."
- checking the output above, I see that "data-channels" is listed as "false" in the "flags" section
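- for future reference, the negotiated-features flags can be pulled out of a saved admin API response with a quick grep. This is just an illustrative sketch: response.json is a stand-in filename, and the JSON below is a trimmed fake of the real handle_info output:

```shell
# save a trimmed, fake handle_info response (illustrative stand-in for the real admin API output)
cat > response.json <<'EOF'
{ "flags": { "data-channels": false, "has-audio": true, "has-video": true } }
EOF
# extract just the data-channels flag
grep -o '"data-channels": [a-z]*' response.json
# → "data-channels": false
```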
Thu May 10, 2018
- attempted to install jangouts on our ec2 instance where the janus gateway is now properly installed
wget https://github.com/jangouts/jangouts/archive/v0.4.7.tar.gz
tar -xzvf v0.4.7.tar.gz
rsync -av jangouts-0.4.7/dist /var/www/html/jangouts.opensourceecology.org/htdocs/jangouts
- at first, clicking the login button did nothing. there was no room list. eventually, a timeout appeared. the fix here was to set "janusServerSSL" to "https://jangouts.opensourceecology.org:8089/janus"
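- for reference, here's a sketch of the relevant part of jangouts' config file after the fix. Key names are from memory of the 0.4.x config.json layout, so treat this as approximate rather than a verbatim copy of our file:

```json
{
  "janusServer": "http://jangouts.opensourceecology.org:8088/janus",
  "janusServerSSL": "https://jangouts.opensourceecology.org:8089/janus"
}
```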
- I connected 3 clients. it ran slow, but it worked.
- what was not working, however, was the text box. When I typed a message & pressed 'Send', I saw "Data channel not open yet. Skipping" in the developer console
- I enabled "janusDebug" in the jangouts config file. While the initial connection became much, much more verbose in the dev console, sending a text message still popped up the same "Data channel not open yet. Skipping." message in the console.
- in fact, the janus "text room" demo doesn't work either, so if I fix this in janus it would probably also fix jangouts http://jangouts.opensourceecology.org/textroomtest.html
- both the console in the browser and the server produce the error "Error processing SDP" when attempting to load the janus text room demo
- I found a line that mentions SDP in the main config file
[root@ip-172-31-28-115 janus]# grep -ir 'sdp' *
janus.cfg:;interface = 1.2.3.4 ; Interface to use (will be used in SDP)
janus.cfg:; candidates from users, but sends its own within the SDP), and whether
janus.cfg.orig:;interface = 1.2.3.4 ; Interface to use (will be used in SDP)
janus.cfg.orig:; candidates from users, but sends its own within the SDP), and whether
janus.cfg.sample:;interface = 1.2.3.4 ; Interface to use (will be used in SDP)
janus.cfg.sample:; candidates from users, but sends its own within the SDP), and whether
janus.plugin.streaming.cfg:; SDP rtpmap and fmtp attributes the remote camera or RTSP server sent.
janus.plugin.streaming.cfg.sample:; SDP rtpmap and fmtp attributes the remote camera or RTSP server sent.
[root@ip-172-31-28-115 janus]#
- Marcin just forwarded me an email from dreamhost suggesting that our account had been hacked. The site they're referring to is a drupal site that I didn't know we had. Drupal recently released patches for critical vulnerabilities, so that makes sense. I wouldn't generally be concerned about this, but our backups (which contain many of our config files with our passwords in them) are stored on this same server--albeit in a different account in a different user's home directory.
- what I'd *like* to do is immediately block all non-port-22 access to this server and start digging into the logs and open connections. Unfortunately, this is a shared hosting server, and I don't have root.
- instead, I'll just update our backup scripts to actually encrypt the backups before sending them to dreamhost (which is what I've already implemented for our process that uploads our backups to glacier)
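- the idea would be something like the following. This is only a sketch: filenames and the key location are illustrative, not our real backup paths, and our glacier process uses gpg; openssl is shown here just to make the example self-contained:

```shell
# sketch: symmetrically encrypt a backup tarball before it leaves the server
# (filenames and key location are illustrative placeholders)
echo "fake backup data" > backup.tar
echo "a-long-random-passphrase" > backup.key

# encrypt with a passphrase file; -pbkdf2 needs openssl >= 1.1.1
openssl enc -aes-256-cbc -pbkdf2 -salt -in backup.tar -out backup.tar.enc -pass file:backup.key

# always verify the round trip before trusting it with real data
openssl enc -d -aes-256-cbc -pbkdf2 -in backup.tar.enc -out roundtrip.tar -pass file:backup.key
cmp backup.tar roundtrip.tar && echo "roundtrip OK"
```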
- I dug through the dreamhost dashboard & couldn't find any firewall settings
- I confirmed that the permissions on our backup data dirs aren't tight enough
hancock% ls -lah hetzner1 hetzner2
hetzner1:
total 8.0K
drwxr-xr-x 10 marcin_ose pg1589252 4.0K May 10 01:21 .
drwx--x--- 26 marcin_ose adm 4.0K May 10 14:42 ..
drwxr-xr-x 3 marcin_ose pg1589252 32 May 5 22:20 20180501-052002
drwxr-xr-x 4 marcin_ose pg1589252 45 May 6 22:20 20180502-052001
drwxr-xr-x 2 marcin_ose pg1589252 10 May 9 22:20 20180505-052001
drwxr-xr-x 4 marcin_ose pg1589252 52 May 9 22:20 20180506-052001
drwxr-xr-x 5 marcin_ose pg1589252 67 May 6 22:30 20180507-052001
drwxr-xr-x 5 marcin_ose pg1589252 67 May 7 22:30 20180508-052001
drwxr-xr-x 5 marcin_ose pg1589252 67 May 8 22:30 20180509-052001
drwxr-xr-x 5 marcin_ose pg1589252 67 May 9 22:30 20180510-052001

hetzner2:
total 8.0K
drwxr-xr-x 9 marcin_ose pg1589252 4.0K May 10 00:31 .
drwx--x--- 26 marcin_ose adm 4.0K May 10 14:42 ..
drwxr-xr-x 8 marcin_ose pg1589252 102 May 1 00:21 20180501-072001
drwxr-xr-x 2 marcin_ose pg1589252 10 May 9 22:20 20180505-072001
drwxr-xr-x 8 marcin_ose pg1589252 102 May 6 00:21 20180506-072001
drwxr-xr-x 8 marcin_ose pg1589252 102 May 7 00:21 20180507-072001
drwxr-xr-x 8 marcin_ose pg1589252 102 May 8 00:21 20180508-072001
drwxr-xr-x 8 marcin_ose pg1589252 102 May 9 00:21 20180509-072001
drwxr-xr-x 8 marcin_ose pg1589252 102 May 10 00:21 20180510-072001
hancock%
- digging in the dreamhost panel shows that this 'pg1589252' group contains all our users = 'marcin_ose, ose_site, ose_community, osecolby, osebackup'
- I can't set it to 'marcin_ose:marcin_ose' as there is no group 'marcin_ose'
- I can't set the group ownership to 'root' as I don't have permission to do that
- there's an 'adm' group, but that has a user 'dhapache' in it
- so I think we have to leave the group as-is and just make group & other all 0s
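- longer-term, the cleaner fix would be to stop the backup script from creating group/other-readable files in the first place, e.g. with a restrictive umask at the top of the script. A sketch (the directory and file names below are illustrative, not our real backup paths):

```shell
# with umask 077, new dirs come out 0700 and new files 0600 automatically,
# so no after-the-fact chmod sweep is needed
umask 077
mkdir demo_backup_dir
touch demo_backup_dir/demo_file
ls -ld demo_backup_dir | cut -c1-10
# → drwx------
```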
hancock% chmod -R 0700 hetzner1
hancock% chmod -R 0700 hetzner2
hancock% find hetzner1 -type f -exec chmod 0600 {} \;
hancock% find hetzner2 -type f -exec chmod 0600 {} \;
- and confirmation of the new permissions looks good
hancock% ls -lah hetzner1 hetzner2
hetzner1:
total 8.0K
drwx------ 10 marcin_ose pg1589252 4.0K May 10 01:21 .
drwx--x--- 26 marcin_ose adm 4.0K May 10 19:27 ..
drwx------ 3 marcin_ose pg1589252 32 May 5 22:20 20180501-052002
drwx------ 4 marcin_ose pg1589252 45 May 6 22:20 20180502-052001
drwx------ 2 marcin_ose pg1589252 10 May 9 22:20 20180505-052001
drwx------ 4 marcin_ose pg1589252 52 May 9 22:20 20180506-052001
drwx------ 5 marcin_ose pg1589252 67 May 6 22:30 20180507-052001
drwx------ 5 marcin_ose pg1589252 67 May 7 22:30 20180508-052001
drwx------ 5 marcin_ose pg1589252 67 May 8 22:30 20180509-052001
drwx------ 5 marcin_ose pg1589252 67 May 9 22:30 20180510-052001

hetzner2:
total 8.0K
drwx------ 9 marcin_ose pg1589252 4.0K May 10 00:31 .
drwx--x--- 26 marcin_ose adm 4.0K May 10 19:27 ..
drwx------ 8 marcin_ose pg1589252 102 May 1 00:21 20180501-072001
drwx------ 2 marcin_ose pg1589252 10 May 9 22:20 20180505-072001
drwx------ 8 marcin_ose pg1589252 102 May 6 00:21 20180506-072001
drwx------ 8 marcin_ose pg1589252 102 May 7 00:21 20180507-072001
drwx------ 8 marcin_ose pg1589252 102 May 8 00:21 20180508-072001
drwx------ 8 marcin_ose pg1589252 102 May 9 00:21 20180509-072001
drwx------ 8 marcin_ose pg1589252 102 May 10 00:21 20180510-072001
hancock%
- I reset the password of the "ose_community" user from the dreamhost dashboard. Max characters was 31 though >:\ I put it in our keepass
- the closest thing I could find to iptables blocking or stopping the service was to go to the dreamhost panel and click "Domains" > "Manage Domains". For each of the sites, I clicked "Remove" under the "Web Hosting" section. This doesn't appear to delete files, just clear out the vhost that makes the folder public (both for dns & ip). I did this for 'openfarmtech.org', 'dhblog.openfarmtech.org', 'dreamhost.openfarmtech.org', 'opensourceecology.org', 'blog.opensourceecology.org', 'community.opensourceecology.org', 'eerik.opensourceecology.org', 'forum.opensourceecology.org'
- there's also 'civicrm.opensourceecology.org', but it was already listed as 'none'
- after this change, I confirmed that the sites went down (curl responds with a timeout)
user@ose:~$ curl 208.113.185.71
curl: (7) Failed to connect to 208.113.185.71 port 80: Connection timed out
user@ose:~$
- moreover, nmap doesn't show port 80 anymore
user@personal:~$ nmap -Pn 208.113.185.71
Starting Nmap 6.47 ( http://nmap.org ) at 2018-05-10 23:00 EDT
Nmap scan report for openfarmtech.org (208.113.185.71)
Host is up (0.037s latency).
Not shown: 995 filtered ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
587/tcp open submission
5222/tcp open xmpp-client
5269/tcp open xmpp-server
Nmap done: 1 IP address (1 host up) scanned in 11.71 seconds
user@personal:~$

Wed May 09, 2018
- updated Jitsi
- documented Janus
- enabled the janus admin api via /opt/janus/etc/janus/janus.transport.http.cfg
- opened port 7088-7089 on the videoconf-dev security group in the aws console
- apparently the https version uses a different port, so I enabled port 7889 as well; this worked
user@ose:~$ curl -k https://jangouts.opensourceecology.org:7889/admin
{
   "janus": "error",
   "error": {
      "code": 454,
      "reason": "Request payload missing"
   }
}
user@ose:~$
- got some actual data from the API!
user@ose:~$ curl -k -X POST -d '{"janus": "list_sessions", "transaction": "123", "admin_secret": "janusoverlord"}' https://jangouts.opensourceecology.org:7889/admin
{
   "janus": "success",
   "transaction": "123",
   "sessions": [
      6109314374460183
   ]
}
user@ose:~$
- and I was able to get a list of handles for the given session (note that the session is passed in the URL, not as a POST variable)
user@ose:~$ curl -k -X POST -d '{"janus": "list_handles", "transaction": "123", "admin_secret": "janusoverlord"}' https://jangouts.opensourceecology.org:7889/admin/6109314374460183
{
   "janus": "success",
   "session_id": 6109314374460183,
   "transaction": "123",
   "handles": [
      8526612291744095
   ]
}
user@ose:~$
- and I was able to get the info on this handle
user@ose:~$ curl -k -X POST -d '{"janus": "handle_info", "transaction": "123", "admin_secret": "janusoverlord"}' "https://jangouts.opensourceecology.org:7889/admin/6109314374460183/8526612291744095" { "janus": "success", "session_id": 6109314374460183, "transaction": "123", "handle_id": 8526612291744095, "info": { "session_id": 6109314374460183, "session_last_activity": 1650443053811, "session_transport": "janus.transport.http", "handle_id": 8526612291744095, "opaque_id": "videoroomtest-uAZvzmILiCiL", "created": 1650028494963, "send_thread_created": false, "current_time": 1650464664821, "plugin": "janus.plugin.videoroom", "plugin_specific": { "type": "publisher", "room": 1234, "id": 7450106407003965, "private_id": 759484767, "display": "tester", "media": { "audio": true, "audio_codec": "opus", "video": true, "video_codec": "vp8", "data": false }, "bitrate": 128000, "audio-level-dBov": 0, "talking": false, "hangingup": 1, "destroyed": 0 }, "flags": { "got-offer": true, "got-answer": true, "processing-offer": false, "starting": true, "ice-restart": false, "ready": false, "stopped": false, "alert": true, "trickle": true, "all-trickles": true, "resend-trickles": false, "trickle-synced": false, "data-channels": false, "has-audio": true, "has-video": true, "rfc4588-rtx": false, "cleaning": false }, "sdps": {}, "queued-packets": 0, "streams": [] } user@ose:~$
- I clicked "publish" in the video room plugin on my browser running on our server. My webcam light lit-up, and I re-ran the above command
user@ose:~$ curl -k -X POST -d '{"janus": "handle_info", "transaction": "123", "admin_secret": "janusoverlord"}' "https://jangouts.opensourceecology.org:7889/admin/6109314374460183/8526612291744095" { "janus": "success", "session_id": 6109314374460183, "transaction": "123", "handle_id": 8526612291744095, "info": { "session_id": 6109314374460183, "session_last_activity": 1650508989240, "session_transport": "janus.transport.http", "handle_id": 8526612291744095, "opaque_id": "videoroomtest-uAZvzmILiCiL", "created": 1650028494963, "send_thread_created": false, "current_time": 1650511883011, "plugin": "janus.plugin.videoroom", "plugin_specific": { "type": "publisher", "room": 1234, "id": 7450106407003965, "private_id": 759484767, "display": "tester", "media": { "audio": true, "audio_codec": "opus", "video": true, "video_codec": "vp8", "data": false }, "bitrate": 128000, "audio-level-dBov": 0, "talking": false, "hangingup": 0, "destroyed": 0 }, "flags": { "got-offer": true, "got-answer": true, "processing-offer": false, "starting": true, "ice-restart": false, "ready": false, "stopped": false, "alert": false, "trickle": true, "all-trickles": true, "resend-trickles": false, "trickle-synced": false, "data-channels": false, "has-audio": true, "has-video": true, "rfc4588-rtx": false, "cleaning": false }, "agent-created": 1650508599901, "ice-mode": "full", "ice-role": "controlled", "sdps": { "profile": "UDP/TLS/RTP/SAVPF", "local": "v=0\r\no=- 1448085342088977254 2 IN IP4 172.31.28.115\r\ns=VideoRoom 1234\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS janus\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111\r\nc=IN IP4 172.31.28.115\r\na=recvonly\r\na=mid:audio\r\na=rtcp-mux\r\na=ice-ufrag:s/cD\r\na=ice-pwd:GXXcK8+fezHFx35asoFADe\r\na=ice-options:trickle\r\na=fingerprint:sha-256 D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38\r\na=setup:active\r\na=rtpmap:111 opus/48000/2\r\na=extmap:1 
urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=candidate:1 1 udp 2013266431 172.31.28.115 39912 typ host\r\na=end-of-candidates\r\nm=video 9 UDP/TLS/RTP/SAVPF 96\r\nc=IN IP4 172.31.28.115\r\na=recvonly\r\na=mid:video\r\na=rtcp-mux\r\na=ice-ufrag:s/cD\r\na=ice-pwd:GXXcK8+fezHFx35asoFADe\r\na=ice-options:trickle\r\na=fingerprint:sha-256 D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38\r\na=setup:active\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=extmap:4 urn:3gpp:video-orientation\r\na=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay\r\na=candidate:1 1 udp 2013266431 172.31.28.115 39912 typ host\r\na=end-of-candidates\r\n", "remote": "v=0\r\no=- 1448085342088977254 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS DMHJt4gzQBKEslKuTly54dUGS8CUpXTy7hpJ\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:acE8\r\na=ice-pwd:QaTF0gUTVVtelbkhtd95f+Uf\r\na=fingerprint:sha-256 CE:80:6F:A9:94:21:10:C7:F1:15:38:3D:2A:D2:DC:13:1C:CB:D0:D9:FC:12:C3:87:A7:CB:E4:C6:AC:DC:E7:E9\r\na=setup:actpass\r\na=mid:audio\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=sendonly\r\na=rtcp-mux\r\na=rtpmap:111 opus/48000/2\r\na=rtcp-fb:111 transport-cc\r\na=fmtp:111 minptime=10;useinbandfec=1\r\na=rtpmap:103 ISAC/16000\r\na=rtpmap:104 ISAC/32000\r\na=rtpmap:9 G722/8000\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:106 CN/32000\r\na=rtpmap:105 CN/16000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:110 telephone-event/48000\r\na=rtpmap:112 telephone-event/32000\r\na=rtpmap:113 telephone-event/16000\r\na=rtpmap:126 telephone-event/8000\r\na=ssrc:3970216167 cname:GAZb0NrxUOm0Gr\r\na=ssrc:3970216167 msid:DMHJt4gzQBKEslKuTly54dUGS8CUpXTy7hpJ 
573226b4-f273-4b18-bcbf-14c5ea70c30b\r\na=ssrc:3970216167 mslabel:DMHJt4gzQBKEslKuTly54dUGS8CUpXTy7hpJ\r\na=ssrc:3970216167 label:573226b4-f273-4b18-bcbf-14c5ea70c30b\r\nm=video 9 UDP/TLS/RTP/SAVPF 96 98 100 102 127 97 99 101 125\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:acE8\r\na=ice-pwd:QaTF0gUTVVtelbkhtd95f+Uf\r\na=fingerprint:sha-256 CE:80:6F:A9:94:21:10:C7:F1:15:38:3D:2A:D2:DC:13:1C:CB:D0:D9:FC:12:C3:87:A7:CB:E4:C6:AC:DC:E7:E9\r\na=setup:actpass\r\na=mid:video\r\na=extmap:2 urn:ietf:params:rtp-hdrext:toffset\r\na=extmap:3 http:www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=extmap:4 urn:3gpp:video-orientation\r\na=extmap:5 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay\r\na=sendonly\r\na=rtcp-mux\r\na=rtcp-rsize\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=rtpmap:98 VP9/90000\r\na=rtcp-fb:98 ccm fir\r\na=rtcp-fb:98 nack\r\na=rtcp-fb:98 nack pli\r\na=rtcp-fb:98 goog-remb\r\na=rtcp-fb:98 transport-cc\r\na=rtpmap:100 H264/90000\r\na=rtcp-fb:100 ccm fir\r\na=rtcp-fb:100 nack\r\na=rtcp-fb:100 nack pli\r\na=rtcp-fb:100 goog-remb\r\na=rtcp-fb:100 transport-cc\r\na=fmtp:100 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f\r\na=rtpmap:102 red/90000\r\na=rtpmap:127 ulpfec/90000\r\na=rtpmap:97 rtx/90000\r\na=fmtp:97 apt=96\r\na=rtpmap:99 rtx/90000\r\na=fmtp:99 apt=98\r\na=rtpmap:101 rtx/90000\r\na=fmtp:101 apt=100\r\na=rtpmap:125 rtx/90000\r\na=fmtp:125 apt=102\r\na=ssrc-group:FID 928459620 3579834730\r\na=ssrc:928459620 cname:GAZb0NrxUOm0Gr\r\na=ssrc:928459620 msid:DMHJt4gzQBKEslKuTly54dUGS8CUpXTy7hpJ 5c33de72-77df-4524-90ca-0cc1539523d9\r\na=ssrc:928459620 mslabel:DMHJt4gzQBKEslKuTly54dUGS8CUpXTy7hpJ\r\na=ssrc:928459620 label:5c33de72-77df-4524-90ca-0cc1539523d9\r\na=ssrc:3579834730 
cname:GAZb0NrxUOm0Gr\r\na=ssrc:3579834730 msid:DMHJt4gzQBKEslKuTly54dUGS8CUpXTy7hpJ 5c33de72-77df-4524-90ca-0cc1539523d9\r\na=ssrc:3579834730 mslabel:DMHJt4gzQBKEslKuTly54dUGS8CUpXTy7hpJ\r\na=ssrc:3579834730 label:5c33de72-77df-4524-90ca-0cc1539523d9\r\n" }, "queued-packets": 0, "streams": [ { "id": 1, "ready": -1, "ssrc": { "audio": 1695918016, "video": 1878586123, "audio-peer": 3970216167, "video-peer": 928459620, "video-peer-rtx": 3579834730 }, "direction": { "audio-send": false, "audio-recv": true, "video-send": false, "video-recv": true }, "rtcp_stats": { "audio": { "base": 48000, "rtt": 0, "lost": 0, "lost-by-remote": 0, "jitter-local": 0, "jitter-remote": 0, "in-link-quality": 0, "in-media-link-quality": 0, "out-link-quality": 0, "out-media-link-quality": 0 }, "video": { "base": 90000, "rtt": 0, "lost": 0, "lost-by-remote": 0, "jitter-local": 0, "jitter-remote": 0, "in-link-quality": 0, "in-media-link-quality": 0, "out-link-quality": 0, "out-media-link-quality": 0 } }, "components": [ { "id": 1, "state": "connecting", "local-candidates": [ "1 1 udp 2013266431 172.31.28.115 39912 typ host" ], "remote-candidates": [ "2774440166 1 udp 1686052607 76.97.XXX.YYY 52120 typ srflx raddr 10.137.2.17 rport 52120 generation 0 ufrag acE8 network-id 1 network-cost 50", "201398067 1 udp 2122260223 10.137.2.17 52120 typ host generation 0 ufrag acE8 network-id 1 network-cost 50" ], "dtls": { "fingerprint": "D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38", "remote-fingerprint": "CE:80:6F:A9:94:21:10:C7:F1:15:38:3D:2A:D2:DC:13:1C:CB:D0:D9:FC:12:C3:87:A7:CB:E4:C6:AC:DC:E7:E9", "remote-fingerprint-hash": "sha-256", "dtls-role": "active", "dtls-state": "created", "retransmissions": 0, "valid": false, "ready": false }, "in_stats": { "audio_packets": 0, "audio_bytes": 0, "audio_bytes_lastsec": 0, "do_audio_nacks": false, "video_packets": 0, "video_bytes": 0, "video_bytes_lastsec": 0, "do_video_nacks": true, "video_nacks": 0, 
"data_packets": 0, "data_bytes": 0 }, "out_stats": { "audio_packets": 0, "audio_bytes": 0, "audio_bytes_lastsec": 0, "audio_nacks": 0, "video_packets": 0, "video_bytes": 0, "video_bytes_lastsec": 0, "video_nacks": 0, "data_packets": 0, "data_bytes": 0 } } ] } ] } }user@ose:~$
- looks like the demos menu already contains an html page for accessing this info without having to query it with curl @ "Demos" > "Admin/Monitor" https://jangouts.opensourceecology.org/admin.html
- this is much easier to refresh, so I was able to extract more info at the right time
{ "session_id": 6109314374460183, "session_last_activity": 1651631733041, "session_transport": "janus.transport.http", "handle_id": 8526612291744095, "opaque_id": "videoroomtest-uAZvzmILiCiL", "created": 1650028494963, "send_thread_created": false, "current_time": 1651642194591, "plugin": "janus.plugin.videoroom", "plugin_specific": { "type": "publisher", "room": 1234, "id": 7450106407003965, "private_id": 759484767, "display": "tester", "media": { "audio": true, "audio_codec": "opus", "video": true, "video_codec": "vp8", "data": false }, "bitrate": 128000, "audio-level-dBov": 0, "talking": false, "hangingup": 0, "destroyed": 0 }, "flags": { "got-offer": true, "got-answer": true, "processing-offer": false, "starting": true, "ice-restart": false, "ready": false, "stopped": false, "alert": false, "trickle": true, "all-trickles": true, "resend-trickles": false, "trickle-synced": false, "data-channels": false, "has-audio": true, "has-video": true, "rfc4588-rtx": false, "cleaning": false }, "agent-created": 1651631274943, "ice-mode": "full", "ice-role": "controlled", "sdps": { "profile": "UDP/TLS/RTP/SAVPF", "local": "v=0\r\no=- 3310046283117293242 2 IN IP4 172.31.28.115\r\ns=VideoRoom 1234\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS janus\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111\r\nc=IN IP4 172.31.28.115\r\na=recvonly\r\na=mid:audio\r\na=rtcp-mux\r\na=ice-ufrag:jJJ2\r\na=ice-pwd:FEMURHPpKGoFlvj6FWtTmn\r\na=ice-options:trickle\r\na=fingerprint:sha-256 D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38\r\na=setup:active\r\na=rtpmap:111 opus/48000/2\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=candidate:1 1 udp 2013266431 172.31.28.115 58674 typ host\r\na=end-of-candidates\r\nm=video 9 UDP/TLS/RTP/SAVPF 96\r\nc=IN IP4 172.31.28.115\r\na=recvonly\r\na=mid:video\r\na=rtcp-mux\r\na=ice-ufrag:jJJ2\r\na=ice-pwd:FEMURHPpKGoFlvj6FWtTmn\r\na=ice-options:trickle\r\na=fingerprint:sha-256 
D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38\r\na=setup:active\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=extmap:4 urn:3gpp:video-orientation\r\na=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay\r\na=candidate:1 1 udp 2013266431 172.31.28.115 58674 typ host\r\na=end-of-candidates\r\n", "remote": "v=0\r\no=- 3310046283117293242 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS QIHUJyUcIX0PP8kBotpf0dm1jT7wdwshp52v\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:9l+1\r\na=ice-pwd:lDIWJ39vbqulMzrUpRXyJIi9\r\na=fingerprint:sha-256 19:15:2B:C6:7B:D7:A4:8E:A0:7F:45:6A:5A:65:54:77:D3:A0:8E:F9:09:85:3D:49:ED:AF:CE:C5:D5:FF:06:36\r\na=setup:actpass\r\na=mid:audio\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=sendonly\r\na=rtcp-mux\r\na=rtpmap:111 opus/48000/2\r\na=rtcp-fb:111 transport-cc\r\na=fmtp:111 minptime=10;useinbandfec=1\r\na=rtpmap:103 ISAC/16000\r\na=rtpmap:104 ISAC/32000\r\na=rtpmap:9 G722/8000\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:106 CN/32000\r\na=rtpmap:105 CN/16000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:110 telephone-event/48000\r\na=rtpmap:112 telephone-event/32000\r\na=rtpmap:113 telephone-event/16000\r\na=rtpmap:126 telephone-event/8000\r\na=ssrc:3122267103 cname:/z3txAJMnmS+uJmM\r\na=ssrc:3122267103 msid:QIHUJyUcIX0PP8kBotpf0dm1jT7wdwshp52v f8d72237-8d1a-4342-86d6-dd96ce507513\r\na=ssrc:3122267103 mslabel:QIHUJyUcIX0PP8kBotpf0dm1jT7wdwshp52v\r\na=ssrc:3122267103 label:f8d72237-8d1a-4342-86d6-dd96ce507513\r\nm=video 9 UDP/TLS/RTP/SAVPF 96 98 100 102 127 97 99 101 125\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:9l+1\r\na=ice-pwd:lDIWJ39vbqulMzrUpRXyJIi9\r\na=fingerprint:sha-256 
19:15:2B:C6:7B:D7:A4:8E:A0:7F:45:6A:5A:65:54:77:D3:A0:8E:F9:09:85:3D:49:ED:AF:CE:C5:D5:FF:06:36\r\na=setup:actpass\r\na=mid:video\r\na=extmap:2 urn:ietf:params:rtp-hdrext:toffset\r\na=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=extmap:4 urn:3gpp:video-orientation\r\na=extmap:5 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay\r\na=sendonly\r\na=rtcp-mux\r\na=rtcp-rsize\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=rtpmap:98 VP9/90000\r\na=rtcp-fb:98 ccm fir\r\na=rtcp-fb:98 nack\r\na=rtcp-fb:98 nack pli\r\na=rtcp-fb:98 goog-remb\r\na=rtcp-fb:98 transport-cc\r\na=rtpmap:100 H264/90000\r\na=rtcp-fb:100 ccm fir\r\na=rtcp-fb:100 nack\r\na=rtcp-fb:100 nack pli\r\na=rtcp-fb:100 goog-remb\r\na=rtcp-fb:100 transport-cc\r\na=fmtp:100 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f\r\na=rtpmap:102 red/90000\r\na=rtpmap:127 ulpfec/90000\r\na=rtpmap:97 rtx/90000\r\na=fmtp:97 apt=96\r\na=rtpmap:99 rtx/90000\r\na=fmtp:99 apt=98\r\na=rtpmap:101 rtx/90000\r\na=fmtp:101 apt=100\r\na=rtpmap:125 rtx/90000\r\na=fmtp:125 apt=102\r\na=ssrc-group:FID 3519146384 830159624\r\na=ssrc:3519146384 cname:/z3txAJMnmS+uJmM\r\na=ssrc:3519146384 msid:QIHUJyUcIX0PP8kBotpf0dm1jT7wdwshp52v 8da2323f-739e-437a-b525-2f141888ca11\r\na=ssrc:3519146384 mslabel:QIHUJyUcIX0PP8kBotpf0dm1jT7wdwshp52v\r\na=ssrc:3519146384 label:8da2323f-739e-437a-b525-2f141888ca11\r\na=ssrc:830159624 cname:/z3txAJMnmS+uJmM\r\na=ssrc:830159624 msid:QIHUJyUcIX0PP8kBotpf0dm1jT7wdwshp52v 8da2323f-739e-437a-b525-2f141888ca11\r\na=ssrc:830159624 mslabel:QIHUJyUcIX0PP8kBotpf0dm1jT7wdwshp52v\r\na=ssrc:830159624 label:8da2323f-739e-437a-b525-2f141888ca11\r\n" }, "queued-packets": 0, "streams": [ { "id": 1, "ready": -1, "ssrc": { "audio": 1320172211, "video": 3686210926, "audio-peer": 
3122267103, "video-peer": 3519146384, "video-peer-rtx": 830159624 }, "direction": { "audio-send": false, "audio-recv": true, "video-send": false, "video-recv": true }, "rtcp_stats": { "audio": { "base": 48000, "rtt": 0, "lost": 0, "lost-by-remote": 0, "jitter-local": 0, "jitter-remote": 0, "in-link-quality": 0, "in-media-link-quality": 0, "out-link-quality": 0, "out-media-link-quality": 0 }, "video": { "base": 90000, "rtt": 0, "lost": 0, "lost-by-remote": 0, "jitter-local": 0, "jitter-remote": 0, "in-link-quality": 0, "in-media-link-quality": 0, "out-link-quality": 0, "out-media-link-quality": 0 } }, "components": [ { "id": 1, "state": "failed", "failed-detected": 1651640475831, "icetimer-started": true, "local-candidates": [ "1 1 udp 2013266431 172.31.28.115 58674 typ host" ], "remote-candidates": [ "201398067 1 udp 2122260223 10.137.2.17 60985 typ host generation 0 ufrag 9l+1 network-id 1 network-cost 50", "2774440166 1 udp 1686052607 76.97.XXX.YYY 60985 typ srflx raddr 10.137.2.17 rport 60985 generation 0 ufrag 9l+1 network-id 1 network-cost 50" ], "dtls": { "fingerprint": "D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38", "remote-fingerprint": "19:15:2B:C6:7B:D7:A4:8E:A0:7F:45:6A:5A:65:54:77:D3:A0:8E:F9:09:85:3D:49:ED:AF:CE:C5:D5:FF:06:36", "remote-fingerprint-hash": "sha-256", "dtls-role": "active", "dtls-state": "created", "retransmissions": 0, "valid": false, "ready": false }, "in_stats": { "audio_packets": 0, "audio_bytes": 0, "audio_bytes_lastsec": 0, "do_audio_nacks": false, "video_packets": 0, "video_bytes": 0, "video_bytes_lastsec": 0, "do_video_nacks": true, "video_nacks": 0, "data_packets": 0, "data_bytes": 0 }, "out_stats": { "audio_packets": 0, "audio_bytes": 0, "audio_bytes_lastsec": 0, "audio_nacks": 0, "video_packets": 0, "video_bytes": 0, "video_bytes_lastsec": 0, "video_nacks": 0, "data_packets": 0, "data_bytes": 0 } } ] } ] }
- so the above output shows that the ICE connection "failed". The 'local-candidates' section lists the AWS private IP address, and the 'remote-candidates' section lists both the private and the public IP addresses of my laptop. My best guess is that the issue is that the AWS IP is private; the actual public IP is 34.210.153.174, but that's not even visible on the node
[root@ip-172-31-28-115 htdocs]# ip -a a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 02:58:0c:bc:ab:50 brd ff:ff:ff:ff:ff:ff
    inet 172.31.28.115/20 brd 172.31.31.255 scope global dynamic eth0
       valid_lft 3586sec preferred_lft 3586sec
    inet6 fe80::58:cff:febc:ab50/64 scope link
       valid_lft forever preferred_lft forever
[root@ip-172-31-28-115 htdocs]#
[root@ip-172-31-28-115 htdocs]# dig +short jangouts.opensourceecology.org
34.210.153.174
[root@ip-172-31-28-115 htdocs]#
- a bit of googling suggests that you can't actually bind to the public IP address inside the EC2 cloud; instead it's a 1:1 NAT mapping, so that shouldn't be an issue. Digging through the config files, I did find that AWS is specifically mentioned for the 'nat_1_1_mapping' variable
In case you're deploying Janus on a server which is configured with
a 1:1 NAT (e.g., Amazon EC2), you might want to also specify the public
address of the machine using the setting below. This will result in
all host candidates (which normally have a private IP address) to
be rewritten with the public address provided in the settings. As
such, use the option with caution and only if you know what you're doing.
Besides, it's still recommended to also enable STUN in those cases,
and keep ICE Lite disabled as it's not strictly speaking a public server.
nat_1_1_mapping = 1.2.3.4
- I changed this to the actual public IP and restarted Janus
[root@ip-172-31-28-115 janus]# cp janus.cfg janus.cfg.orig
[root@ip-172-31-28-115 janus]# vim janus.cfg
[root@ip-172-31-28-115 janus]# diff janus.cfg.orig janus.cfg
132a133
> nat_1_1_mapping = 34.210.153.174
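For future re-deploys, the manual edit above could be scripted. Here's a minimal sketch (the `set_nat_mapping` helper is hypothetical, not part of Janus) that replaces or appends the nat_1_1_mapping line in the old INI-style janus.cfg text:

```python
import re

def set_nat_mapping(cfg_text, public_ip):
    """Set (or add) nat_1_1_mapping in janus.cfg-style text.

    Hypothetical helper: replaces an existing (possibly commented-out)
    nat_1_1_mapping line, or appends one if none is present.
    """
    pattern = re.compile(r'^[;#]?\s*nat_1_1_mapping\s*=.*$', re.MULTILINE)
    replacement = 'nat_1_1_mapping = %s' % public_ip
    if pattern.search(cfg_text):
        return pattern.sub(replacement, cfg_text, count=1)
    return cfg_text.rstrip('\n') + '\n' + replacement + '\n'

cfg = ";nat_1_1_mapping = 1.2.3.4\n"
print(set_nat_mapping(cfg, "34.210.153.174"))
# -> nat_1_1_mapping = 34.210.153.174
```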
- that did it! The demos are working as expected now. This is how it should look during an echo test
{ "session_id": 8969403825798607, "session_last_activity": 1668174556992, "session_transport": "janus.transport.http", "handle_id": 5189677910802057, "opaque_id": "echotest-MLVUW6wG3cOM", "created": 1668169108013, "send_thread_created": true, "current_time": 1668184930779, "plugin": "janus.plugin.echotest", "plugin_specific": { "audio_active": true, "video_active": true, "audio_codec": "opus", "video_codec": "vp8", "bitrate": 0, "peer-bitrate": 0, "slowlink_count": 0, "hangingup": 0, "destroyed": 0 }, "flags": { "got-offer": true, "got-answer": true, "processing-offer": false, "starting": true, "ice-restart": false, "ready": true, "stopped": false, "alert": false, "trickle": true, "all-trickles": true, "resend-trickles": false, "trickle-synced": false, "data-channels": false, "has-audio": true, "has-video": true, "rfc4588-rtx": false, "cleaning": false }, "agent-created": 1668171794627, "ice-mode": "full", "ice-role": "controlled", "sdps": { "profile": "UDP/TLS/RTP/SAVPF", "local": "v=0\r\no=- 3511923798191890868 2 IN IP4 34.210.153.174\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS janus\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111\r\nc=IN IP4 34.210.153.174\r\na=sendrecv\r\na=mid:audio\r\na=rtcp-mux\r\na=ice-ufrag:6Erz\r\na=ice-pwd:RT7PufgrzLp3jAREqbRfn2\r\na=ice-options:trickle\r\na=fingerprint:sha-256 D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38\r\na=setup:active\r\na=rtpmap:111 opus/48000/2\r\na=ssrc:3772083770 cname:janusaudio\r\na=ssrc:3772083770 msid:janus janusa0\r\na=ssrc:3772083770 mslabel:janus\r\na=ssrc:3772083770 label:janusa0\r\na=candidate:1 1 udp 2013266431 34.210.153.174 39816 typ host\r\na=end-of-candidates\r\nm=video 9 UDP/TLS/RTP/SAVPF 96\r\nc=IN IP4 34.210.153.174\r\na=sendrecv\r\na=mid:video\r\na=rtcp-mux\r\na=ice-ufrag:6Erz\r\na=ice-pwd:RT7PufgrzLp3jAREqbRfn2\r\na=ice-options:trickle\r\na=fingerprint:sha-256 
D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38\r\na=setup:active\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=ssrc:1648006521 cname:janusvideo\r\na=ssrc:1648006521 msid:janus janusv0\r\na=ssrc:1648006521 mslabel:janus\r\na=ssrc:1648006521 label:janusv0\r\na=candidate:1 1 udp 2013266431 34.210.153.174 39816 typ host\r\na=end-of-candidates\r\nm=application 0 DTLS/SCTP 0\r\nc=IN IP4 34.210.153.174\r\na=inactive\r\n", "remote": "v=0\r\no=- 3511923798191890868 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video data\r\na=msid-semantic: WMS lm6ZXIoswvVlvDppEx8maepQnVoZ9lIut18Y\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:p+PZ\r\na=ice-pwd:D72E+7DA73ZpXav0OX6hqKc4\r\na=fingerprint:sha-256 63:80:36:6C:FF:B6:A3:90:EC:9E:A1:8F:88:55:BD:6B:BF:22:79:6D:7C:39:F5:28:95:30:20:D5:F4:72:CB:15\r\na=setup:actpass\r\na=mid:audio\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=sendrecv\r\na=rtcp-mux\r\na=rtpmap:111 opus/48000/2\r\na=rtcp-fb:111 transport-cc\r\na=fmtp:111 minptime=10;useinbandfec=1\r\na=rtpmap:103 ISAC/16000\r\na=rtpmap:104 ISAC/32000\r\na=rtpmap:9 G722/8000\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:106 CN/32000\r\na=rtpmap:105 CN/16000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:110 telephone-event/48000\r\na=rtpmap:112 telephone-event/32000\r\na=rtpmap:113 telephone-event/16000\r\na=rtpmap:126 telephone-event/8000\r\na=ssrc:3304842974 cname:RDB2zd7gCXXkfiVe\r\na=ssrc:3304842974 msid:lm6ZXIoswvVlvDppEx8maepQnVoZ9lIut18Y 86d1ad61-714c-4f3c-9925-aa05116dd348\r\na=ssrc:3304842974 mslabel:lm6ZXIoswvVlvDppEx8maepQnVoZ9lIut18Y\r\na=ssrc:3304842974 label:86d1ad61-714c-4f3c-9925-aa05116dd348\r\nm=video 9 UDP/TLS/RTP/SAVPF 96 98 100 102 127 97 99 101 125\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 
0.0.0.0\r\na=ice-ufrag:p+PZ\r\na=ice-pwd:D72E+7DA73ZpXav0OX6hqKc4\r\na=fingerprint:sha-256 63:80:36:6C:FF:B6:A3:90:EC:9E:A1:8F:88:55:BD:6B:BF:22:79:6D:7C:39:F5:28:95:30:20:D5:F4:72:CB:15\r\na=setup:actpass\r\na=mid:video\r\na=extmap:2 urn:ietf:params:rtp-hdrext:toffset\r\na=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=extmap:4 urn:3gpp:video-orientation\r\na=extmap:5 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay\r\na=sendrecv\r\na=rtcp-mux\r\na=rtcp-rsize\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=rtpmap:98 VP9/90000\r\na=rtcp-fb:98 ccm fir\r\na=rtcp-fb:98 nack\r\na=rtcp-fb:98 nack pli\r\na=rtcp-fb:98 goog-remb\r\na=rtcp-fb:98 transport-cc\r\na=rtpmap:100 H264/90000\r\na=rtcp-fb:100 ccm fir\r\na=rtcp-fb:100 nack\r\na=rtcp-fb:100 nack pli\r\na=rtcp-fb:100 goog-remb\r\na=rtcp-fb:100 transport-cc\r\na=fmtp:100 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f\r\na=rtpmap:102 red/90000\r\na=rtpmap:127 ulpfec/90000\r\na=rtpmap:97 rtx/90000\r\na=fmtp:97 apt=96\r\na=rtpmap:99 rtx/90000\r\na=fmtp:99 apt=98\r\na=rtpmap:101 rtx/90000\r\na=fmtp:101 apt=100\r\na=rtpmap:125 rtx/90000\r\na=fmtp:125 apt=102\r\na=ssrc-group:FID 2063430829 1027334379\r\na=ssrc:2063430829 cname:RDB2zd7gCXXkfiVe\r\na=ssrc:2063430829 msid:lm6ZXIoswvVlvDppEx8maepQnVoZ9lIut18Y 66b9a8c4-e14d-4b31-ad79-c02819b2c179\r\na=ssrc:2063430829 mslabel:lm6ZXIoswvVlvDppEx8maepQnVoZ9lIut18Y\r\na=ssrc:2063430829 label:66b9a8c4-e14d-4b31-ad79-c02819b2c179\r\na=ssrc:1027334379 cname:RDB2zd7gCXXkfiVe\r\na=ssrc:1027334379 msid:lm6ZXIoswvVlvDppEx8maepQnVoZ9lIut18Y 66b9a8c4-e14d-4b31-ad79-c02819b2c179\r\na=ssrc:1027334379 mslabel:lm6ZXIoswvVlvDppEx8maepQnVoZ9lIut18Y\r\na=ssrc:1027334379 label:66b9a8c4-e14d-4b31-ad79-c02819b2c179\r\nm=application 9 DTLS/SCTP 
5000\r\nc=IN IP4 0.0.0.0\r\na=ice-ufrag:p+PZ\r\na=ice-pwd:D72E+7DA73ZpXav0OX6hqKc4\r\na=fingerprint:sha-256 63:80:36:6C:FF:B6:A3:90:EC:9E:A1:8F:88:55:BD:6B:BF:22:79:6D:7C:39:F5:28:95:30:20:D5:F4:72:CB:15\r\na=setup:actpass\r\na=mid:data\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n" }, "queued-packets": -1, "streams": [ { "id": 1, "ready": -1, "ssrc": { "audio": 3772083770, "video": 1648006521, "audio-peer": 3304842974, "video-peer": 2063430829, "video-peer-rtx": 1027334379 }, "direction": { "audio-send": true, "audio-recv": true, "video-send": true, "video-recv": true }, "codecs": { "audio-pt": 111, "audio-codec": "opus", "video-pt": 96, "video-codec": "vp8" }, "rtcp_stats": { "audio": { "base": 48000, "rtt": 118, "lost": 0, "lost-by-remote": 0, "jitter-local": 8, "jitter-remote": 7, "in-link-quality": 100, "in-media-link-quality": 100, "out-link-quality": 100, "out-media-link-quality": 100 }, "video": { "base": 90000, "rtt": 106, "lost": 0, "lost-by-remote": 0, "jitter-local": 12, "jitter-remote": 34, "in-link-quality": 100, "in-media-link-quality": 100, "out-link-quality": 100, "out-media-link-quality": 100 } }, "components": [ { "id": 1, "state": "ready", "connected": 1668173348678, "local-candidates": [ "1 1 udp 2013266431 34.210.153.174 39816 typ host" ], "remote-candidates": [ "2774440166 1 udp 1686052607 76.97.223.185 34677 typ srflx raddr 10.137.2.17 rport 34677 generation 0 ufrag p+PZ network-id 1 network-cost 50", "201398067 1 udp 2122260223 10.137.2.17 34677 typ host generation 0 ufrag p+PZ network-id 1 network-cost 50" ], "selected-pair": "1 <-> 2774440166", "dtls": { "fingerprint": "D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38", "remote-fingerprint": "63:80:36:6C:FF:B6:A3:90:EC:9E:A1:8F:88:55:BD:6B:BF:22:79:6D:7C:39:F5:28:95:30:20:D5:F4:72:CB:15", "remote-fingerprint-hash": "sha-256", "dtls-role": "active", "dtls-state": "connected", "retransmissions": 0, "valid": true, "ready": true, 
"handshake-started": 1668173348679, "connected": 1668173551000 }, "in_stats": { "audio_packets": 569, "audio_bytes": 49444, "audio_bytes_lastsec": 4655, "do_audio_nacks": false, "video_packets": 522, "video_bytes": 524820, "video_bytes_lastsec": 69439, "do_video_nacks": true, "video_nacks": 0, "data_packets": 3, "data_bytes": 2252 }, "out_stats": { "audio_packets": 569, "audio_bytes": 49444, "audio_bytes_lastsec": 4655, "audio_nacks": 0, "video_packets": 522, "video_bytes": 524820, "video_bytes_lastsec": 69439, "video_nacks": 0, "data_packets": 2, "data_bytes": 1249 } } ] } ] }
- and here's how it should look with 1 person in a video room
{ "session_id": 1402256714270032, "session_last_activity": 1668108452194, "session_transport": "janus.transport.http", "handle_id": 6890846667155791, "opaque_id": "videoroomtest-gQ0HkOODYlTn", "created": 1668098788364, "send_thread_created": true, "current_time": 1668115288567, "plugin": "janus.plugin.videoroom", "plugin_specific": { "type": "publisher", "room": 1234, "id": 8025939259403578, "private_id": 3814460675, "display": "tester", "media": { "audio": true, "audio_codec": "opus", "video": true, "video_codec": "vp8", "data": false }, "bitrate": 128000, "audio-level-dBov": 0, "talking": false, "hangingup": 0, "destroyed": 0 }, "flags": { "got-offer": true, "got-answer": true, "processing-offer": false, "starting": true, "ice-restart": false, "ready": true, "stopped": false, "alert": false, "trickle": true, "all-trickles": true, "resend-trickles": false, "trickle-synced": false, "data-channels": false, "has-audio": true, "has-video": true, "rfc4588-rtx": false, "cleaning": false }, "agent-created": 1668105120869, "ice-mode": "full", "ice-role": "controlled", "sdps": { "profile": "UDP/TLS/RTP/SAVPF", "local": "v=0\r\no=- 4889174595854780834 2 IN IP4 34.210.153.174\r\ns=VideoRoom 1234\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS janus\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111\r\nc=IN IP4 34.210.153.174\r\na=recvonly\r\na=mid:audio\r\na=rtcp-mux\r\na=ice-ufrag:Uzqo\r\na=ice-pwd:8HifUw7pi8Yo7sAjXkXBho\r\na=ice-options:trickle\r\na=fingerprint:sha-256 D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38\r\na=setup:active\r\na=rtpmap:111 opus/48000/2\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=candidate:1 1 udp 2013266431 34.210.153.174 52952 typ host\r\na=end-of-candidates\r\nm=video 9 UDP/TLS/RTP/SAVPF 96\r\nc=IN IP4 34.210.153.174\r\na=recvonly\r\na=mid:video\r\na=rtcp-mux\r\na=ice-ufrag:Uzqo\r\na=ice-pwd:8HifUw7pi8Yo7sAjXkXBho\r\na=ice-options:trickle\r\na=fingerprint:sha-256 
D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38\r\na=setup:active\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=extmap:4 urn:3gpp:video-orientation\r\na=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay\r\na=candidate:1 1 udp 2013266431 34.210.153.174 52952 typ host\r\na=end-of-candidates\r\n", "remote": "v=0\r\no=- 4889174595854780834 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS UcQkRoy2WPdTsECg49jOztMxnWrHKJX4fQ7G\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:Zvcf\r\na=ice-pwd:+deoKfh3lfJijEJjllJSl+Uy\r\na=fingerprint:sha-256 12:1E:EE:EA:79:6C:10:0B:F1:CF:BA:36:9B:CA:06:2E:DD:9F:27:94:BE:59:D5:F3:41:33:ED:8C:B7:B8:BD:BE\r\na=setup:actpass\r\na=mid:audio\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=sendonly\r\na=rtcp-mux\r\na=rtpmap:111 opus/48000/2\r\na=rtcp-fb:111 transport-cc\r\na=fmtp:111 minptime=10;useinbandfec=1\r\na=rtpmap:103 ISAC/16000\r\na=rtpmap:104 ISAC/32000\r\na=rtpmap:9 G722/8000\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:106 CN/32000\r\na=rtpmap:105 CN/16000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:110 telephone-event/48000\r\na=rtpmap:112 telephone-event/32000\r\na=rtpmap:113 telephone-event/16000\r\na=rtpmap:126 telephone-event/8000\r\na=ssrc:1148435629 cname:GyI76iRrqiXyq78e\r\na=ssrc:1148435629 msid:UcQkRoy2WPdTsECg49jOztMxnWrHKJX4fQ7G b57f5c0e-5053-4075-94af-be03835fbe73\r\na=ssrc:1148435629 mslabel:UcQkRoy2WPdTsECg49jOztMxnWrHKJX4fQ7G\r\na=ssrc:1148435629 label:b57f5c0e-5053-4075-94af-be03835fbe73\r\nm=video 9 UDP/TLS/RTP/SAVPF 96 98 100 102 127 97 99 101 125\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:Zvcf\r\na=ice-pwd:+deoKfh3lfJijEJjllJSl+Uy\r\na=fingerprint:sha-256 
12:1E:EE:EA:79:6C:10:0B:F1:CF:BA:36:9B:CA:06:2E:DD:9F:27:94:BE:59:D5:F3:41:33:ED:8C:B7:B8:BD:BE\r\na=setup:actpass\r\na=mid:video\r\na=extmap:2 urn:ietf:params:rtp-hdrext:toffset\r\na=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=extmap:4 urn:3gpp:video-orientation\r\na=extmap:5 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay\r\na=sendonly\r\na=rtcp-mux\r\na=rtcp-rsize\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=rtpmap:98 VP9/90000\r\na=rtcp-fb:98 ccm fir\r\na=rtcp-fb:98 nack\r\na=rtcp-fb:98 nack pli\r\na=rtcp-fb:98 goog-remb\r\na=rtcp-fb:98 transport-cc\r\na=rtpmap:100 H264/90000\r\na=rtcp-fb:100 ccm fir\r\na=rtcp-fb:100 nack\r\na=rtcp-fb:100 nack pli\r\na=rtcp-fb:100 goog-remb\r\na=rtcp-fb:100 transport-cc\r\na=fmtp:100 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f\r\na=rtpmap:102 red/90000\r\na=rtpmap:127 ulpfec/90000\r\na=rtpmap:97 rtx/90000\r\na=fmtp:97 apt=96\r\na=rtpmap:99 rtx/90000\r\na=fmtp:99 apt=98\r\na=rtpmap:101 rtx/90000\r\na=fmtp:101 apt=100\r\na=rtpmap:125 rtx/90000\r\na=fmtp:125 apt=102\r\na=ssrc-group:FID 3236656850 3421542791\r\na=ssrc:3236656850 cname:GyI76iRrqiXyq78e\r\na=ssrc:3236656850 msid:UcQkRoy2WPdTsECg49jOztMxnWrHKJX4fQ7G 2bcf83cc-4452-4e59-916b-ef827f43ddb7\r\na=ssrc:3236656850 mslabel:UcQkRoy2WPdTsECg49jOztMxnWrHKJX4fQ7G\r\na=ssrc:3236656850 label:2bcf83cc-4452-4e59-916b-ef827f43ddb7\r\na=ssrc:3421542791 cname:GyI76iRrqiXyq78e\r\na=ssrc:3421542791 msid:UcQkRoy2WPdTsECg49jOztMxnWrHKJX4fQ7G 2bcf83cc-4452-4e59-916b-ef827f43ddb7\r\na=ssrc:3421542791 mslabel:UcQkRoy2WPdTsECg49jOztMxnWrHKJX4fQ7G\r\na=ssrc:3421542791 label:2bcf83cc-4452-4e59-916b-ef827f43ddb7\r\n" }, "queued-packets": -1, "streams": [ { "id": 1, "ready": -1, "ssrc": { "audio": 2938984141, "video": 942089758, 
"audio-peer": 1148435629, "video-peer": 3236656850, "video-peer-rtx": 3421542791 }, "direction": { "audio-send": false, "audio-recv": true, "video-send": false, "video-recv": true }, "codecs": { "audio-pt": 111, "audio-codec": "opus", "video-pt": 96, "video-codec": "vp8" }, "rtcp_stats": { "audio": { "base": 48000, "rtt": 0, "lost": 0, "lost-by-remote": 0, "jitter-local": 3, "jitter-remote": 0, "in-link-quality": 100, "in-media-link-quality": 100, "out-link-quality": 0, "out-media-link-quality": 0 }, "video": { "base": 90000, "rtt": 0, "lost": 0, "lost-by-remote": 0, "jitter-local": 26, "jitter-remote": 0, "in-link-quality": 100, "in-media-link-quality": 100, "out-link-quality": 0, "out-media-link-quality": 0 } }, "components": [ { "id": 1, "state": "connected", "connected": 1668107173208, "local-candidates": [ "1 1 udp 2013266431 34.210.153.174 52952 typ host" ], "remote-candidates": [ "201398067 1 udp 2122260223 10.137.2.17 44959 typ host generation 0 ufrag Zvcf network-id 1 network-cost 50", "2774440166 1 udp 1686052607 76.97.223.185 44959 typ srflx raddr 10.137.2.17 rport 44959 generation 0 ufrag Zvcf network-id 1 network-cost 50" ], "selected-pair": "1 <-> 2774440166", "dtls": { "fingerprint": "D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38", "remote-fingerprint": "12:1E:EE:EA:79:6C:10:0B:F1:CF:BA:36:9B:CA:06:2E:DD:9F:27:94:BE:59:D5:F3:41:33:ED:8C:B7:B8:BD:BE", "remote-fingerprint-hash": "sha-256", "dtls-role": "active", "dtls-state": "connected", "retransmissions": 0, "valid": true, "ready": true, "handshake-started": 1668107173210, "connected": 1668107359694 }, "in_stats": { "audio_packets": 396, "audio_bytes": 37431, "audio_bytes_lastsec": 4744, "do_audio_nacks": false, "video_packets": 160, "video_bytes": 129569, "video_bytes_lastsec": 17622, "do_video_nacks": true, "video_nacks": 0, "data_packets": 3, "data_bytes": 2250 }, "out_stats": { "audio_packets": 0, "audio_bytes": 0, "audio_bytes_lastsec": 0, 
"audio_nacks": 0, "video_packets": 0, "video_bytes": 0, "video_bytes_lastsec": 0, "video_nacks": 0, "data_packets": 2, "data_bytes": 1249 } } ] } ] }
- I added another laptop to the video room, and it appears to work fine, though I'm not sure if the traffic is going through the server or just locally. Anyway, in this case there were 2x sessions, each with 2x handles. One handle was marked as type=subscriber and the other as type=publisher.
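When comparing a broken handle against a healthy one, most of the useful signal is in streams[].components[]. This sketch (a hypothetical helper, assuming the JSON shapes in the admin-API dumps above) pulls out the fields I kept checking by hand:

```python
def summarize_handle(info):
    """Summarize ICE/DTLS state from a Janus admin-API handle_info dict.

    Hypothetical helper; assumes the 'streams'/'components' layout
    shown in the dumps above.
    """
    out = []
    for stream in info.get("streams", []):
        for comp in stream.get("components", []):
            out.append({
                "stream": stream.get("id"),
                "component": comp.get("id"),
                "state": comp.get("state"),
                "selected-pair": comp.get("selected-pair"),
                "dtls-state": comp.get("dtls", {}).get("dtls-state"),
            })
    return out

# A failed handle has state 'failed' and no selected pair:
broken = {"streams": [{"id": 1, "components": [{"id": 1, "state": "failed",
          "dtls": {"dtls-state": "created"}}]}]}
print(summarize_handle(broken))
```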
Tue May 08, 2018
- whitelisted mod_security rule = '960024' to fix Forbidden message when Marcin commented on a post on osemain
Mon May 07, 2018
- logged time + updated Current Meeting
Fri May 04, 2018
- Marcin completed the first-draft of the wiki migration test plan last night
http://opensourceecology.org/wiki/Wiki_Validation
- I made some edits & additions to the test plan
- I sent an email to Marcin asking for clarification on a few of the items of the test plan
- after Marcin responds, we'll go through the checklist on the staging site. If it passes, then we'll schedule a CHG date & time.
- In the meantime, I'll continue with the Jangouts POC by trying to put the html/js demos vhost behind a quick self-signed https cert
mkdir /etc/ssl/private/
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt
vim /etc/nginx/conf.d/jangouts.opensourceecology.org
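The vhost file edited above points nginx at the self-signed pair. A minimal sketch of what such a server block might look like (cert/key paths come from the openssl command above; the document root is an assumption, not the actual path used):

```nginx
server {
    listen 443 ssl;
    server_name jangouts.opensourceecology.org;

    # self-signed pair generated with the openssl command above
    ssl_certificate     /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    # serve the janus html/js demos (root path is an assumption)
    root /var/www/jangouts/htdocs;
    index index.html;
}
```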
- that eliminated the previous security error, but now I'm getting "Probably a network error, is the gateway down?: [object Object]" again
- the dev console shows "Using REST API to contact Janus: https://jangouts.opensourceecology.org:8089/janus / Failed to load resource: net::ERR_CONNECTION_TIMED_OUT", so I probably need to change the gateway to use https as well
[root@ip-172-31-28-115 janus]# cp janus.transport.http.cfg janus.transport.http.cfg.orig
[root@ip-172-31-28-115 janus]# vim janus.transport.http.cfg
[root@ip-172-31-28-115 janus]# diff janus.transport.http.cfg.orig janus.transport.http.cfg
22c22
< https = no ; Whether to enable HTTPS (default=no)
---
> https = yes ; Whether to enable HTTPS (default=no)
[root@ip-172-31-28-115 janus]#
- I also had to update the security group to permit 8089 inbound (it was 8088 before, with http). After I did this, the error changed from a delayed timeout to an immediate "janus.js:74 OPTIONS https://jangouts.opensourceecology.org:8089/janus net::ERR_INSECURE_RESPONSE"
- this is because it's a self-signed cert on a different "server" (different port) than the one we created an exception for. The way around this is to just hit "https://jangouts.opensourceecology.org:8089/janus" in the browser once, approve the certificate exception (click "proceed to ..."), and then try again
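The endpoint can also be sanity-checked outside the browser. The janus.js client first POSTs a "create" request, and the body is easy to build by hand; this sketch constructs that payload (the 12-character transaction string is arbitrary, per the Janus REST API):

```python
import json
import random
import string

def janus_create_body():
    """Build the JSON body janus.js sends to create a session.

    The Janus REST API expects {"janus": "create", "transaction": <random>}.
    """
    txn = ''.join(random.choice(string.ascii_letters + string.digits)
                  for _ in range(12))
    return json.dumps({"janus": "create", "transaction": txn})

print(janus_create_body())
```

With the cert exception handled via `-k`, this could be POSTed with something like `curl -k -d '{"janus":"create","transaction":"abc123abc123"}' https://jangouts.opensourceecology.org:8089/janus` to confirm the gateway is reachable.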
- now I can load all the demos, but nothing appears to actually work. For example, when I click the 'publish' button in the Video Room demo, it goes away after a few seconds
- here's what I see in the browser's console
janus.js:3064 isTrickleEnabled: undefined janus.js:2987 isAudioSendEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} videoroomtest.js:109 Consent dialog should be on now janus.js:2987 isAudioSendEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} janus.js:3020 isVideoSendEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} janus.js:2987 isAudioSendEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} janus.js:3020 isVideoSendEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} janus.js:2998 isAudioSendRequired: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} janus.js:3031 isVideoSendRequired: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} janus.js:2106 getUserMedia constraints Object {audio: true, video: false} videoroomtest.js:109 Consent dialog should be off now janus.js:1406 streamsDone: MediaStream {id: "r6H7MBqVvdSoPCKa3pvdKK9cevO4aGCEuKDu", active: true, onaddtrack: null, onremovetrack: null, onactive: null…} janus.js:1408 -- Audio tracks: [MediaStreamTrack] janus.js:1409 -- Video tracks: [] janus.js:1518 Creating PeerConnection janus.js:1519 Object {optional: Array(1)} janus.js:1521 RTCPeerConnection {remoteDescription: RTCSessionDescription, signalingState: "stable", iceGatheringState: "new", iceConnectionState: "new", onnegotiationneeded: null…} janus.js:1526 Preparing local SDP and gathering candidates (trickle=true) janus.js:1577 Adding local stream janus.js:1579 Adding local track: MediaStreamTrack {kind: "audio", id: "f4d9b3bf-de5a-4b69-b2bf-f5135f2e7341", label: "Default", enabled: true, muted: false…} janus.js:3053 isDataEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: 
false} videoroomtest.js:276 ::: Got a local stream ::: videoroomtest.js:278 MediaStream {id: "r6H7MBqVvdSoPCKa3pvdKK9cevO4aGCEuKDu", active: true, onaddtrack: null, onremovetrack: null, onactive: null…} janus.js:2183 Creating offer (iceDone=false) janus.js:3009 isAudioRecvEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} janus.js:3042 isVideoRecvEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} janus.js:2282 Object {offerToReceiveAudio: false, offerToReceiveVideo: false} janus.js:3020 isVideoSendEnabled: Object {audioRecv: false, videoRecv: false, audioSend: true, videoSend: true, update: false} ↵"}9706 label:f4d9b3bf-de5a-4b69-b2bf-f5135f2e7341, sdp: "v=0 janus.js:2301 Setting local description janus.js:2319 Offer ready janus.js:2320 Object {media: Object, simulcast: false, success: function, error: function} videoroomtest.js:402 Got publisher SDP! ↵"}9706 label:f4d9b3bf-de5a-4b69-b2bf-f5135f2e7341v=0 janus.js:1132 Sending message to plugin (handle=1762304507750846): janus.js:1133 Object {janus: "message", body: Object, transaction: "lG8BwROpEMYy", jsep: Object} janus.js:1229 Sending trickle candidate (handle=1762304507750846): janus.js:1230 Object {janus: "trickle", candidate: Object, transaction: "avyXGgAHSYaW"} janus.js:1229 Sending trickle candidate (handle=1762304507750846): janus.js:1230 Object {janus: "trickle", candidate: Object, transaction: "MS94i2gXf9wU"} janus.js:1229 Sending trickle candidate (handle=1762304507750846): janus.js:1230 Object {janus: "trickle", candidate: Object, transaction: "AnnWqBAjyOoe"} janus.js:1229 Sending trickle candidate (handle=1762304507750846): janus.js:1230 Object {janus: "trickle", candidate: Object, transaction: "SWyGgmyBz5ze"} janus.js:1229 Sending trickle candidate (handle=1762304507750846): janus.js:1230 Object {janus: "trickle", candidate: Object, transaction: "t3tlOjUVQ6Yc"} janus.js:1229 Sending trickle candidate 
(handle=1762304507750846): janus.js:1230 Object {janus: "trickle", candidate: Object, transaction: "NIY0ynxdRiwm"} janus.js:1534 End of candidates. janus.js:1229 Sending trickle candidate (handle=1762304507750846): janus.js:1230 Object {janus: "trickle", candidate: Object, transaction: "4TjYcHlKlWs6"} janus.js:1175 Message sent! janus.js:1176 Object {janus: "ack", session_id: 7298240886377737, transaction: "lG8BwROpEMYy"} janus.js:610 Got a plugin event on session 7298240886377737 janus.js:611 Object {janus: "event", session_id: 7298240886377737, transaction: "lG8BwROpEMYy", sender: 1762304507750846, plugindata: Object…} janus.js:622 -- Event is coming from 1762304507750846 (janus.plugin.videoroom) janus.js:624 Object {videoroom: "event", room: 1234, configured: "ok", audio_codec: "opus"} janus.js:632 Handling SDP as well... ↵"}end-of-candidates1864 2 IN IP4 172.31.28.11…2.31.28.115 40693 typ host janus.js:637 Notifying application... videoroomtest.js:149 ::: Got a message (publisher) ::: videoroomtest.js:150 Object {videoroom: "event", room: 1234, configured: "ok", audio_codec: "opus"} videoroomtest.js:152 Event: event videoroomtest.js:251 Handling SDP as well... ↵"}end-of-candidates1864 2 IN IP4 172.31.28.11…2.31.28.115 40693 typ host janus.js:2144 Remote description accepted! janus.js:1242 Candidate sent! janus.js:1243 Object {janus: "ack", session_id: 7298240886377737, transaction: "avyXGgAHSYaW"} janus.js:1242 Candidate sent! janus.js:1243 Object {janus: "ack", session_id: 7298240886377737, transaction: "MS94i2gXf9wU"} janus.js:410 Long poll... janus.js:1242 Candidate sent! janus.js:1243 Object {janus: "ack", session_id: 7298240886377737, transaction: "AnnWqBAjyOoe"} janus.js:1242 Candidate sent! janus.js:1243 Object {janus: "ack", session_id: 7298240886377737, transaction: "SWyGgmyBz5ze"} janus.js:1242 Candidate sent! janus.js:1243 Object {janus: "ack", session_id: 7298240886377737, transaction: "t3tlOjUVQ6Yc"} janus.js:1242 Candidate sent! 
janus.js:1243 Object {janus: "ack", session_id: 7298240886377737, transaction: "NIY0ynxdRiwm"} janus.js:1242 Candidate sent! janus.js:1243 Object {janus: "ack", session_id: 7298240886377737, transaction: "4TjYcHlKlWs6"} janus.js:535 Got a hangup event on session 7298240886377737 janus.js:536 Object {janus: "hangup", session_id: 7298240886377737, sender: 1762304507750846, reason: "ICE failed"} videoroomtest.js:131 Janus says our WebRTC PeerConnection is down now janus.js:2708 Cleaning WebRTC stuff janus.js:2753 Stopping local stream tracks janus.js:2757 MediaStreamTrack {kind: "audio", id: "f4d9b3bf-de5a-4b69-b2bf-f5135f2e7341", label: "Default", enabled: true, muted: false…} videoroomtest.js:324 ::: Got a cleanup notification: we are unpublished now ::: janus.js:410 Long poll... janus.js:455 Got a keepalive on session 7298240886377737 janus.js:410 Long poll...
- and here's the output in the terminal where janus is running (not as a daemon)
[1762304507750846] Creating ICE agent (ICE Full mode, controlled)
[ERR] [sdp-utils.c:janus_sdp_get_codec_rtpmap:718] Unsupported codec 'none'
[WARN] [1762304507750846] ICE failed for component 1 in stream 1, but let's give it some time... (trickle received, answer received, alert not set)
[ERR] [ice.c:janus_ice_check_failed:1428] [1762304507750846] ICE failed for component 1 in stream 1...
[janus.plugin.videoroom-0x7fbf44003e30] No WebRTC media anymore; 0x7fbf44004aa0 0x7fbf44005780
[1762304507750846] WebRTC resources freed; 0x7fbf44004aa0 0x7fbf44004980
- I tried the echo test demo from another machine that actually has a camera. It showed my local video, but not the echo. Here's what the janus output was on the server
Creating new session: 607187205577013; 0x7fbf44003400
Creating new handle in session 607187205577013: 5995577680857670; 0x7fbf44003400 0x7fbf44007e30
[5995577680857670] Creating ICE agent (ICE Full mode, controlled)
[WARN] [5995577680857670] Skipping disabled/unsupported media line...
[WARN] [5995577680857670] Skipping disabled/unsupported media line...
[WARN] [5995577680857670] ICE failed for component 1 in stream 1, but let's give it some time... (trickle received, answer received, alert not set)
[ERR] [ice.c:janus_ice_check_failed:1428] [5995577680857670] ICE failed for component 1 in stream 1...
[janus.plugin.echotest-0x7fbf44007d00] No WebRTC media anymore
[5995577680857670] WebRTC resources freed; 0x7fbf44007e30 0x7fbf44003400
Destroying session 607187205577013; 0x7fbf44003400
Detaching handle from JANUS EchoTest plugin; 0x7fbf44007e30 0x7fbf44007d00 0x7fbf44007e30 0x7fbf4400fbb0
[5995577680857670] WebRTC resources freed; 0x7fbf44007e30 0x7fbf44003400
[5995577680857670] Handle and related resources freed; 0x7fbf44007e30 0x7fbf44003400
- and here's the server-side output from running the recordplaytest demo
- this came from just loading the page and then recording, which appeared to work fine
Creating new session: 5857587399713942; 0x7fbf44003400
Creating new handle in session 5857587399713942: 4667944641759999; 0x7fbf44003400 0x7fbf44003a00
[4667944641759999] Creating ICE agent (ICE Full mode, controlled)
[WARN] Audio codec: opus
[WARN] Video codec: vp8
[WARN] [4667944641759999] ICE failed for component 1 in stream 1, but let's give it some time... (trickle received, answer received, alert not set)
[ERR] [ice.c:janus_ice_check_failed:1428] [4667944641759999] ICE failed for component 1 in stream 1...
[janus.plugin.recordplay-0x7fbf4400e090] No WebRTC media anymore
File is 8 bytes: rec-6957767953715790-audio.mjr
Closed audio recording rec-6957767953715790-audio.mjr
File is 8 bytes: rec-6957767953715790-video.mjr
Closed video recording rec-6957767953715790-video.mjr
[4667944641759999] WebRTC resources freed; 0x7fbf44003a00 0x7fbf44003400
- and this came when finding the recording in the list & attempting to play it back. This part didn't work at all, and the browser said "Error opening recording files"
[WARN] Error opening audio recording, trying to go on anyway
[WARN] Error opening video recording, trying to go on anyway
- and when I attempted to load the videoroomtest from a computer that actually has a camera, I would see myself for a bit, but then it would disappear. I could click "Publish", and it would do the same: show me briefly, then disappear. Here's what the server showed as this happened:
Creating new session: 18614750988061; 0x7fbf440028f0
Creating new handle in session 18614750988061: 3771387103782624; 0x7fbf440028f0 0x7fbf44003a00
[WARN] [3771387103782624] No stream, queueing this trickle as it got here before the SDP...
[3771387103782624] Creating ICE agent (ICE Full mode, controlled)
Timeout expired for session 6914064699623145...
Detaching handle from JANUS VideoCall plugin; 0x7fbf4400e0c0 0x7fbf44002100 0x7fbf4400e0c0 0x7fbf4400bc30
[janus.plugin.videocall-0x7fbf44002100] No WebRTC media anymore
[7781385739946092] WebRTC resources freed; 0x7fbf4400e0c0 0x7fbf44004390
[7781385739946092] Handle and related resources freed; 0x7fbf4400e0c0 0x7fbf44004390
Destroying session 6914064699623145; 0x7fbf44004390
[WARN] [3771387103782624] ICE failed for component 1 in stream 1, but let's give it some time... (trickle received, answer received, alert not set)
[ERR] [ice.c:janus_ice_check_failed:1428] [3771387103782624] ICE failed for component 1 in stream 1...
[janus.plugin.videoroom-0x7fbf4400e090] No WebRTC media anymore; 0x7fbf44003a00 0x7fbf44005780
[3771387103782624] WebRTC resources freed; 0x7fbf44003a00 0x7fbf440028f0
- I did some reading about ICE and debugging janus
- that pointed me to some great resources for debugging webrtc on the browser's end
- in chrome, use chrome://webrtc-internals
- in firefox, use about:webrtc
- in either, this is a good client-test site provided by the janus developers (Meetecho, who apparently provide streaming services for the IETF): https://selftest.conf.meetecho.com/test/
- after the above research, I bet the issue is that I haven't opened the necessary UDP ports to our janus server. I've only touched incoming TCP, and these ICE/STUN/TURN components will likely need UDP opened up as well
- also note that, even though we're using a cloud provider here, I am explicitly using the public ip address for our server, so only the client should be traversing NAT here
user@personal:~$ dig jangouts.opensourceecology.org

; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> jangouts.opensourceecology.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43410
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;jangouts.opensourceecology.org.  IN  A

;; ANSWER SECTION:
jangouts.opensourceecology.org. 300 IN A 34.210.153.174

;; Query time: 2229 msec
;; SERVER: 10.137.5.1#53(10.137.5.1)
;; WHEN: Fri May 04 18:07:54 EDT 2018
;; MSG SIZE  rcvd: 75

user@personal:~$
- it appears that janus is listening on two UDP ports: 5002 & 5004
[root@ip-172-31-28-115 htdocs]# ss -planu | grep -i janus
UNCONN 0 0 *:5002 *:* users:(("janus",pid=6070,fd=5))
UNCONN 0 0 *:5004 *:* users:(("janus",pid=6070,fd=6))
[root@ip-172-31-28-115 htdocs]#
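Those two listeners can be double-checked mechanically; here's a small sketch that pulls the local port column out of `ss`-style lines (canned in a heredoc here; on the live box you'd pipe `ss -planu | grep -i janus` in instead):

```shell
# extract the UDP ports janus has open from `ss -planu`-style output;
# the two sample lines are copied from the server output above
ports=$(grep -i janus <<'EOF' | awk '{print $4}' | sed 's/.*://' | sort -n | paste -sd' '
UNCONN 0 0 *:5002 *:* users:(("janus",pid=6070,fd=5))
UNCONN 0 0 *:5004 *:* users:(("janus",pid=6070,fd=6))
EOF
)
echo "$ports"   # 5002 5004
```

Note that janus allocates ICE media ports dynamically per peer connection, so these two fixed listeners alone aren't the whole UDP story (they appear to be the streaming plugin's sample RTP ports).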
- that didn't help
- I also spent some time researching getting qubes to forward my camera to a new 'conference' appvm so that I can do this testing better. I got it working in chromium, but not firefox
- it appears not to be possible to directly enumerate all microphone & camera devices in firefox, but this page helps with that https://www.webrtc-experiment.com/demos/MediaStreamTrack.getSources.html
- in chromium, you can use the above link or go to chrome://settings > advanced > content settings. There you can see drop-downs showing both the available microphones & cameras.
- I got it working, so now I can fully test on my laptop running QubesOS
- I found a research paper analyzing the performance of Janus with 10 publishers + 90 subscribers in the VideoRoom plugin, which is very near to what we want to accomplish at OSE https://www.researchgate.net/publication/300727546_Performance_analysis_of_the_Janus_WebRTC_gateway
- I also stumbled on two more janus/jitsi alternatives:
- kurento https://doc-kurento.readthedocs.io/en/stable/
- they only provide installs via AWS CloudFormation or on Ubuntu https://doc-kurento.readthedocs.io/en/stable/user/installation.html#local-installation
- this project has excellent documentation, and they state that all UDP ports (0-65535) must be open to run a STUN server
- licode http://lynckia.com/licode/
- this can be installed from a docker image, which may be less (or more?) of a headache http://licode.readthedocs.io/en/stable/docker/
- otherwise, they only support installs on ubuntu http://licode.readthedocs.io/en/stable/from_source/
- I still think that Jangouts is probably our best solution afaict
- I enabled all incoming UDP ports on the security group; this helped, but I'm still having issues
Destroying session 33940990778352; 0x7f2f5802e800
Detaching handle from JANUS Record&Play plugin; 0x7f2f58013a70 0x7f2f58014030 0x7f2f58013a70 0x7f2f58014060
[ERR] [ice.c:janus_plugin_session_is_alive:401] Invalid plugin session (0x7f2f58014030)
[807900953279518] WebRTC resources freed; 0x7f2f58013a70 0x7f2f5802e800
[807900953279518] Handle and related resources freed; 0x7f2f58013a70 0x7f2f5802e800
Creating new session: 2711508791369859; 0x7f2f5802e800
Creating new handle in session 2711508791369859: 492301824865738; 0x7f2f5802e800 0x7f2f5802d880
[492301824865738] Creating ICE agent (ICE Full mode, controlled)
[WARN] [492301824865738] ICE failed for component 1 in stream 1, but let's give it some time... (trickle received, answer received, alert not set)
[ERR] [ice.c:janus_ice_check_failed:1428] [492301824865738] ICE failed for component 1 in stream 1...
[janus.plugin.videoroom-0x7f2f58014600] No WebRTC media anymore; 0x7f2f5802d880 0x7f2f58012f80
[492301824865738] WebRTC resources freed; 0x7f2f5802d880 0x7f2f5802e800
- to get better debugging info, I should enable the Janus Admin API per this guide http://www.meetecho.com/blog/understanding-the-janus-admin-api/
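From memory of the 0.x sample configs, enabling the Admin API should just be a couple of settings; a sketch (option names need verifying against the files that `make configs` installs, and the secret is a placeholder):

```
; /opt/janus/etc/janus/janus.transport.http.cfg
[admin]
admin_http = yes          ; serve the Admin API over plain HTTP
admin_port = 7088         ; sample-config default port

; /opt/janus/etc/janus/janus.cfg
[general]
admin_secret = CHANGE-ME  ; required to authenticate Admin API requests
```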
Thu May 03, 2018
- began investigating a POC for jangouts, which uses the janus video gateway and the videoroom plugin as a self-hosted google-hangouts-like alternative https://janus.conf.meetecho.com/docs/README.html
- first configure attempt failed
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus
...
checking for JANUS... no
configure: error: Package requirements ( glib-2.0 >= 2.34 nice jansson >= 2.5 libssl >= 1.0.1 libcrypto ) were not met:

No package 'glib-2.0' found
No package 'nice' found
No package 'jansson' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables JANUS_CFLAGS
and JANUS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
[root@ip-172-31-28-115 janus-gateway]#
- fixed the above issues by also installing some additional packages. some of them required enabling all repos explicitly
yum install -y glib2-devel
yum --enablerepo=* -y install libnice-devel
- another error
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus
...
configure: error: libsrtp and libsrtp2 not found. See README.md for installation instructions
[root@ip-172-31-28-115 janus-gateway]#
- and here's the attempted fix for libsrtp, but it didn't help
yum --enablerepo=* -y install libsrtp-devel
- it requires a newer version (>=1.5.0) than what is in the repo (1.4.4)
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus
...
checking for SRTP15X... no
configure: error: Package requirements ( libsrtp >= 1.5.0 ) were not met:

Requested 'libsrtp >= 1.5.0' but version of libsrtp is 1.4.4

You may find new versions of libsrtp at http://srtp.sourceforge.net

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables SRTP15X_CFLAGS
and SRTP15X_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
[root@ip-172-31-28-115 janus-gateway]#
- tried a manual install
# remove libsrtp from the repos as it's too old
yum remove -y libsrtp

# attempt to install from source
mkdir -p $HOME/src
pushd $HOME/src
wget https://github.com/cisco/libsrtp/archive/v1.5.4.tar.gz
tar xfv v1.5.4.tar.gz
cd libsrtp-1.5.4
./configure --prefix=/usr --enable-openssl
make shared_library && sudo make install
- that still failed; configure was still unable to find it
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus
...
checking for SRTP15X... no
configure: error: Package requirements ( libsrtp >= 1.5.0 ) were not met:

No package 'libsrtp' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables SRTP15X_CFLAGS
and SRTP15X_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus --libdir=/usr/lib/
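This recurring "No package ... found" failure is a pkg-config search-path problem: configure only sees libraries whose .pc file sits on pkg-config's search path, so a from-source install to a non-standard location can be invisible to it. A quick self-contained demo (the demo-srtp.pc metadata below is made up):

```shell
# pkg-config only finds packages whose .pc file is on its search path
tmp=$(mktemp -d)
cat > "$tmp/demo-srtp.pc" <<'EOF'
Name: demo-srtp
Description: stand-in for a from-source libsrtp install
Version: 1.5.4
Libs: -lsrtp
EOF

# not on the default path, so the check fails:
pkg-config --exists demo-srtp && echo on-path || echo off-path

# pointing PKG_CONFIG_PATH at the directory fixes it, which is exactly
# what the configure error message suggests:
PKG_CONFIG_PATH="$tmp" pkg-config --exists demo-srtp && echo on-path || echo off-path
```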
- so I tried version 2.0, first uninstalling 1.5.4
[root@ip-172-31-28-115 libsrtp-1.5.4]# make uninstall
rm -f /usr/include/srtp/*.h
rm -f /usr/lib/libsrtp.*
rmdir /usr/include/srtp
if [ "libsrtp.pc" != "" ]; then \
  rm -f /usr/lib/pkgconfig/libsrtp.pc; \
fi
[root@ip-172-31-28-115 libsrtp-1.5.4]#
mkdir -p $HOME/src
pushd $HOME/src
wget https://github.com/cisco/libsrtp/archive/v2.0.0.tar.gz
tar xfv v2.0.0.tar.gz
cd libsrtp-2.0.0
./configure --prefix=/usr --enable-openssl
make shared_library && sudo make install
- that worked. next issue is lua-libs
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus ... checking for dlopen in -ldl... yes checking for srtp_init in -lsrtp2... yes checking srtp2/srtp.h usability... yes checking srtp2/srtp.h presence... yes checking for srtp2/srtp.h... yes checking for srtp_crypto_policy_set_aes_gcm_256_16_auth in -lsrtp2... yes checking for usrsctp_finish in -lusrsctp... no checking for LIBCURL... yes checking for doxygen... no checking for dot... no checking for gengetopt... yes checking for TRANSPORTS... yes checking for MHD... no checking for lws_create_vhost in -lwebsockets... no checking for amqp_error_string2 in -lrabbitmq... no checking for MQTTAsync_create in -lpaho-mqtt3a... no checking for PLUGINS... yes checking for SOFIA... no checking for LIBRE... no checking for LIBRE... no checking for OPUS... no checking for OGG... no checking for LUA... no checking for LUA... no configure: error: lua-libs not found. See README.md for installation instructions or use --disable-plugin-lua [root@ip-172-31-28-115 janus-gateway]#
- fixed by installing from all repos again
yum --enablerepo=* -y install lua-devel
- and that worked!
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus ... config.status: executing libtool commands libsrtp version: 2.x SSL/crypto library: OpenSSL DTLS set-timeout: not available DataChannels support: no Recordings post-processor: no TURN REST API client: yes Doxygen documentation: no Transports: REST (HTTP/HTTPS): no WebSockets: no RabbitMQ: no MQTT: no Unix Sockets: yes Plugins: Echo Test: yes Streaming: yes Video Call: yes SIP Gateway (Sofia): no SIP Gateway (libre): no NoSIP (RTP Bridge): yes Audio Bridge: no Video Room: yes Voice Mail: no Record&Play: yes Text Room: yes Lua Interpreter: yes Event handlers: Sample event handler: yes RabbitMQ event handler:no JavaScript modules: no If this configuration is ok for you, do a 'make' to start building Janus. A 'make install' will install Janus and its plugins to the specified prefix. Finally, a 'make configs' will install some sample configuration files too (something you'll only want to do the first time, though). [root@ip-172-31-28-115 janus-gateway]#
- both make & make install worked too!
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /opt/janus/lib/janus/plugins ---------------------------------------------------------------------- Libraries have been installed in: /opt/janus/lib/janus/plugins If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the `-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the `LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the `LD_RUN_PATH' environment variable during linking - use the `-Wl,-rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to `/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- /bin/mkdir -p '/opt/janus/include/janus/plugins' /bin/install -c -m 644 plugins/plugin.h '/opt/janus/include/janus/plugins' /bin/mkdir -p '/opt/janus/share/janus/recordings' /bin/install -c -m 644 plugins/recordings/1234.nfo plugins/recordings/rec-sample-audio.mjr plugins/recordings/rec-sample-video.mjr '/opt/janus/share/janus/recordings' /bin/mkdir -p '/opt/janus/share/janus/streams' /bin/install -c -m 644 plugins/streams/music.mulaw plugins/streams/radio.alaw plugins/streams/test_gstreamer.sh plugins/streams/test_gstreamer_1.sh '/opt/janus/share/janus/streams' /bin/mkdir -p '/opt/janus/lib/janus/transports' /bin/sh ./libtool --mode=install /bin/install -c transports/libjanus_pfunix.la '/opt/janus/lib/janus/transports' libtool: install: /bin/install -c transports/.libs/libjanus_pfunix.so.0.0.0 /opt/janus/lib/janus/transports/libjanus_pfunix.so.0.0.0 libtool: install: (cd /opt/janus/lib/janus/transports && { ln -s -f libjanus_pfunix.so.0.0.0 libjanus_pfunix.so.0 || { rm -f libjanus_pfunix.so.0 && ln -s 
libjanus_pfunix.so.0.0.0 libjanus_pfunix.so.0; }; }) libtool: install: (cd /opt/janus/lib/janus/transports && { ln -s -f libjanus_pfunix.so.0.0.0 libjanus_pfunix.so || { rm -f libjanus_pfunix.so && ln -s libjanus_pfunix.so.0.0.0 libjanus_pfunix.so; }; }) libtool: install: /bin/install -c transports/.libs/libjanus_pfunix.lai /opt/janus/lib/janus/transports/libjanus_pfunix.la libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /opt/janus/lib/janus/transports ---------------------------------------------------------------------- Libraries have been installed in: /opt/janus/lib/janus/transports If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the `-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the `LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the `LD_RUN_PATH' environment variable during linking - use the `-Wl,-rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to `/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- /bin/mkdir -p '/opt/janus/include/janus/transports' /bin/install -c -m 644 transports/transport.h '/opt/janus/include/janus/transports' make[3]: Leaving directory `/root/sandbox/janus-gateway' make[2]: Leaving directory `/root/sandbox/janus-gateway' make[1]: Leaving directory `/root/sandbox/janus-gateway' [root@ip-172-31-28-115 janus-gateway]#
- but then it failed to start, citing libsrtp issues again
[root@ip-172-31-28-115 janus-gateway]# /opt/janus/bin/janus
/opt/janus/bin/janus: error while loading shared libraries: libsrtp2.so.1: cannot open shared object file: No such file or directory
[root@ip-172-31-28-115 janus-gateway]# /opt/janus/bin/janus --help
/opt/janus/bin/janus: error while loading shared libraries: libsrtp2.so.1: cannot open shared object file: No such file or directory
[root@ip-172-31-28-115 janus-gateway]#
- I tried again using the '--libdir=/usr/lib64' or '--libdir=/usr/lib' flag during configure, per the README's recommendations, but that wasn't particularly helpful
make uninstall
make
make install
make configs
- indeed, it's at "/usr/lib"
[root@ip-172-31-28-115 janus-gateway]# find /usr -name libsrtp2.so.1
/usr/lib/libsrtp2.so.1
[root@ip-172-31-28-115 janus-gateway]#
- I got it to run by setting the LD_LIBRARY_PATH per https://groups.google.com/forum/#!topic/meetecho-janus/fznCh3UYSCg (strictly, 'LD_LIBRARY_PATH=/usr/lib && cmd' only sets the variable in the current shell without exporting it to cmd; the usual one-shot form is 'LD_LIBRARY_PATH=/usr/lib /opt/janus/bin/janus')
[root@ip-172-31-28-115 janus-gateway]# LD_LIBRARY_PATH=/usr/lib && /opt/janus/bin/janus --help Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd Compiled on: Thu May 3 22:03:54 UTC 2018 janus 0.4.0 Usage: janus [OPTIONS]... -h, --help Print help and exit -V, --version Print version and exit -b, --daemon Launch Janus in background as a daemon (default=off) -p, --pid-file=path Open the specified PID file when starting Janus (default=none) -N, --disable-stdout Disable stdout based logging (default=off) -L, --log-file=path Log to the specified file (default=stdout only) -i, --interface=ipaddress Interface to use (will be the public IP) -P, --plugins-folder=path Plugins folder (default=./plugins) -C, --config=filename Configuration file to use -F, --configs-folder=path Configuration files folder (default=./conf) -c, --cert-pem=filename DTLS certificate -k, --cert-key=filename DTLS certificate key -K, --cert-pwd=text DTLS certificate key passphrase (if needed) -S, --stun-server=ip:port STUN server(:port) to use, if needed (e.g., gateway behind NAT, default=none) -1, --nat-1-1=ip Public IP to put in all host candidates, assuming a 1:1 NAT is in place (e.g., Amazon EC2 instances, default=none) -E, --ice-enforce-list=list Comma-separated list of the only interfaces to use for ICE gathering; partial strings are supported (e.g., eth0 or eno1,wlan0, default=none) -X, --ice-ignore-list=list Comma-separated list of interfaces or IP addresses to ignore for ICE gathering; partial strings are supported (e.g., vmnet8,192.168.0.1,10.0.0.1 or vmnet,192.168., default=vmnet) -6, --ipv6-candidates Whether to enable IPv6 candidates or not (experimental) (default=off) -l, --libnice-debug Whether to enable libnice debugging or not (default=off) -f, --full-trickle Do full-trickle instead of half-trickle (default=off) -I, --ice-lite Whether to enable the ICE Lite mode or not (default=off) -T, --ice-tcp Whether to enable ICE-TCP or not (warning: only works with ICE Lite) (default=off) -R, 
--rfc-4588 Whether to enable RFC4588 retransmissions support or not (default=off) -q, --max-nack-queue=number Maximum size of the NACK queue (in ms) per user for retransmissions -t, --no-media-timer=number Time (in s) that should pass with no media (audio or video) being received before Janus notifies you about this -r, --rtp-port-range=min-max Port range to use for RTP/RTCP -n, --server-name=name Public name of this Janus instance (default=MyJanusInstance) -s, --session-timeout=number Session timeout value, in seconds (default=60) -m, --reclaim-session-timeout=number Reclaim session timeout value, in seconds (default=0) -d, --debug-level=1-7 Debug/logging level (0=disable debugging, 7=maximum debug level; default=4) -D, --debug-timestamps Enable debug/logging timestamps (default=off) -o, --disable-colors Disable color in the logging (default=off) -a, --apisecret=randomstring API secret all requests need to pass in order to be accepted by Janus (useful when wrapping Janus API requests in a server, none by default) -A, --token-auth Enable token-based authentication for all requests (default=off) --token-auth-secret=randomstring Secret to verify HMAC-signed tokens with, to be used with -A -e, --event-handlers Enable event handlers (default=off) [root@ip-172-31-28-115 janus-gateway]#
- attempts to start failed
[root@ip-172-31-28-115 janus-gateway]# LD_LIBRARY_PATH=/usr/lib && /opt/janus/bin/janus Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd Compiled on: Thu May 3 22:03:54 UTC 2018 --------------------------------------------------- Starting Meetecho Janus (WebRTC Gateway) v0.4.0 --------------------------------------------------- Checking command line arguments... Debug/log level is 4 Debug/log timestamps are disabled Debug/log colors are enabled Adding 'vmnet' to the ICE ignore list... Using 172.31.28.115 as local IP... [WARN] Token based authentication disabled Initializing recorder code Initializing ICE stuff (Full mode, ICE-TCP candidates disabled, half-trickle, IPv6 support disabled) TURN REST API backend: (disabled) [WARN] Janus is deployed on a private address (172.31.28.115) but you didn't specify any STUN server! Expect trouble if this is supposed to work over the internet and not just in a LAN... Crypto: OpenSSL pre-1.1.0 [WARN] The libsrtp installation does not support AES-GCM profiles Fingerprint of our certificate: D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38 [WARN] Data Channels support not compiled [WARN] Event handlers support disabled Plugins folder: /opt/janus/lib/janus/plugins Transport plugins folder: /opt/janus/lib/janus/transports [FATAL] [janus.c:main:4209] No Janus API transport is available... enable at least one and restart Janus [root@ip-172-31-28-115 janus-gateway]#
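Those two warnings matter on EC2: the instance only knows its private 172.31 address, so janus will advertise ICE candidates that nobody outside can reach unless it's told its public IP and/or a STUN server. A sketch of the [nat] section of janus.cfg (option names are from memory of the 0.x sample config and need verifying; the STUN host is just an example, and 34.210.153.174 is the public IP from the dig earlier):

```
[nat]
stun_server = stun.l.google.com   ; example public STUN server
stun_port = 19302
nat_1_1_mapping = 34.210.153.174  ; cfg equivalent of the --nat-1-1 flag in --help
```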
- I added an ldconfig config file
# add lib dir
cat << EOF > /etc/ld.so.conf.d/janus.conf
/usr/lib
/opt/janus/lib/janus/plugins
EOF
ldconfig
- and now I got a different error
[root@ip-172-31-28-115 janus-gateway]# /opt/janus/bin/janus Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd Compiled on: Thu May 3 22:14:30 UTC 2018 --------------------------------------------------- Starting Meetecho Janus (WebRTC Gateway) v0.4.0 --------------------------------------------------- Checking command line arguments... Debug/log level is 4 Debug/log timestamps are disabled Debug/log colors are enabled Adding 'vmnet' to the ICE ignore list... Using 172.31.28.115 as local IP... [WARN] Token based authentication disabled Initializing recorder code Initializing ICE stuff (Full mode, ICE-TCP candidates disabled, half-trickle, IPv6 support disabled) TURN REST API backend: (disabled) [WARN] Janus is deployed on a private address (172.31.28.115) but you didn't specify any STUN server! Expect trouble if this is supposed to work over the internet and not just in a LAN... Crypto: OpenSSL pre-1.1.0 [WARN] The libsrtp installation does not support AES-GCM profiles Fingerprint of our certificate: D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38 [WARN] Data Channels support not compiled [WARN] Event handlers support disabled Plugins folder: /opt/janus/lib/janus/plugins Loading plugin 'libjanus_echotest.so'... JANUS EchoTest plugin initialized! Loading plugin 'libjanus_recordplay.so'... JANUS Record&Play plugin initialized! Loading plugin 'libjanus_nosip.so'... JANUS NoSIP plugin initialized! Loading plugin 'libjanus_streaming.so'... JANUS Streaming plugin initialized! Loading plugin 'libjanus_videocall.so'... JANUS VideoCall plugin initialized! Loading plugin 'libjanus_videoroom.so'... JANUS VideoRoom plugin initialized! Loading plugin 'libjanus_textroom.so'... JANUS TextRoom plugin initialized! Loading plugin 'libjanus_lua.so'... 
[ERR] [plugins/janus_lua.c:janus_lua_init:1136] Error loading Lua script /opt/janus/share/janus/lua/echotest.lua: /opt/janus/share/janus/lua/echotest.lua:6: module 'json' not found: no field package.preload['json'] no file './json.lua' no file '/usr/share/lua/5.1/json.lua' no file '/usr/share/lua/5.1/json/init.lua' no file '/usr/lib64/lua/5.1/json.lua' no file '/usr/lib64/lua/5.1/json/init.lua' no file '/opt/janus/share/janus/lua/json.lua' no file './json.so' no file '/usr/lib64/lua/5.1/json.so' no file '/usr/lib64/lua/5.1/loadall.so' [WARN] The 'janus.plugin.lua' plugin could not be initialized Transport plugins folder: /opt/janus/lib/janus/transports Loading transport plugin 'libjanus_pfunix.so'... [WARN] Unix Sockets server disabled (Janus API) [WARN] Unix Sockets server disabled (Admin API) [WARN] No Unix Sockets server started, giving up... [WARN] The 'janus.transport.pfunix' plugin could not be initialized [FATAL] [janus.c:main:4209] No Janus API transport is available... enable at least one and restart Janus [root@ip-172-31-28-115 janus-gateway]#
- changed the janus.transport.pfunix.cfg config file to enable it
[root@ip-172-31-28-115 janus]# cd /opt/janus/etc/janus
[root@ip-172-31-28-115 janus]# cp janus.transport.pfunix.cfg janus.transport.pfunix.cfg.orig
[root@ip-172-31-28-115 janus]# vim janus.transport.pfunix.cfg
[root@ip-172-31-28-115 janus]# diff janus.transport.pfunix.cfg janus.transport.pfunix.cfg.orig
6c6
< enabled = yes ; Whether to enable the Unix Sockets interface
---
> enabled = no ; Whether to enable the Unix Sockets interface
[root@ip-172-31-28-115 janus]#
- that changed the error to complaining that no path was configured
[root@ip-172-31-28-115 janus-gateway]# /opt/janus/bin/janus
...
Loading transport plugin 'libjanus_pfunix.so'...
[WARN] No path configured, skipping Unix Sockets server (Janus API)
[WARN] Unix Sockets server disabled (Admin API)
[WARN] No Unix Sockets server started, giving up...
[WARN] The 'janus.transport.pfunix' plugin could not be initialized
[FATAL] [janus.c:main:4209] No Janus API transport is available... enable at least one and restart Janus
[root@ip-172-31-28-115 janus]#
- set the path in the same config file
[root@ip-172-31-28-115 janus]# cd /opt/janus/etc/janus
[root@ip-172-31-28-115 janus]# cp janus.transport.pfunix.cfg janus.transport.pfunix.cfg.orig
[root@ip-172-31-28-115 janus]# vim janus.transport.pfunix.cfg
[root@ip-172-31-28-115 janus]# diff janus.transport.pfunix.cfg janus.transport.pfunix.cfg.orig
6c6
< enabled = yes ; Whether to enable the Unix Sockets interface
---
> enabled = no ; Whether to enable the Unix Sockets interface
11d10
< path = /opt/janus/lib/janus/ux-janusapi.sock
[root@ip-172-31-28-115 janus]#
- created a DNS entry for jangouts.opensourceecology.org pointing to the ec2 dev instance
- I created a vhost in nginx and copied the files from the html dir of the janus-gateway github repo into the new vhost's docroot https://github.com/meetecho/janus-gateway/tree/master/html
- without any further configuration, I got a network error "Probably a network error, is the gateway down?: [object Object]"
- this is all client-side code. so for the echo test demo, we'd configure the javascript file = echotest.js
- that's not going to be able to reach the janus api running on a unix socket; I need to expose it somehow, perhaps by having nginx proxy to the socket?
- this is described here https://janus.conf.meetecho.com/docs/rest.html#plainhttp
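Once a REST transport is working, one way to expose it through the existing nginx vhost is a proxying location block; port 8088 and the /janus base path are the sample-config defaults, so verify against janus.transport.http.cfg (this is an untested sketch):

```
# inside the jangouts.opensourceecology.org server block
location /janus {
    proxy_pass http://127.0.0.1:8088/janus;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}
```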
- after reading through the docs, there appear to be many pluggable transports for the janus api. Lots of things suggest that the http rest api is the default, but apparently it didn't get installed on my system. Digging into the code shows that this http transport depends on libmicrohttpd https://github.com/meetecho/janus-gateway/blob/master/transports/janus_http.c
- I installed libmicrohttpd & reinstalled janus
yum install -y libmicrohttpd
sh autogen.sh
./configure --prefix=/opt/janus
make
make install
make configs
- that didn't work, and the configure summary showed why
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus ... config.status: executing libtool commands libsrtp version: 2.x SSL/crypto library: OpenSSL DTLS set-timeout: not available DataChannels support: no Recordings post-processor: no TURN REST API client: yes Doxygen documentation: no Transports: REST (HTTP/HTTPS): no WebSockets: no RabbitMQ: no MQTT: no Unix Sockets: yes Plugins: Echo Test: yes Streaming: yes Video Call: yes SIP Gateway (Sofia): no SIP Gateway (libre): no NoSIP (RTP Bridge): yes Audio Bridge: no Video Room: yes Voice Mail: no Record&Play: yes Text Room: yes Lua Interpreter: yes Event handlers: Sample event handler: yes RabbitMQ event handler:no JavaScript modules: no If this configuration is ok for you, do a 'make' to start building Janus. A 'make install' will install Janus and its plugins to the specified prefix. Finally, a 'make configs' will install some sample configuration files too (something you'll only want to do the first time, though). [root@ip-172-31-28-115 janus-gateway]# ...
- under the "Transports" section, only "Unix Sockets" is "yes". We probably need "REST (HTTP/HTTPS)" to be "yes" too
- I dug through the 'configure' file & discovered the option is called '--enable-rest', but executing that says that libmicrohttpd was not found, even though it's there!
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus --enable-rest
...
checking for TRANSPORTS... yes
checking for MHD... no
configure: error: libmicrohttpd not found. See README.md for installation instructions or use --disable-rest
[root@ip-172-31-28-115 janus-gateway]# rpm -qa | grep -i libmicrohttpd
libmicrohttpd-0.9.33-2.el7.x86_64
[root@ip-172-31-28-115 janus-gateway]#
- I fixed this by installing the libmicrohttpd-devel package
[root@ip-172-31-28-115 htdocs]# yum --enablerepo=* -y search libmicrohttpd Loaded plugins: amazon-id, rhui-lb, search-disabled-repos N/S matched: libmicrohttpd libmicrohttpd-debuginfo.i686 : Debug information for package libmicrohttpd libmicrohttpd-debuginfo.x86_64 : Debug information for package libmicrohttpd libmicrohttpd-devel.i686 : Development files for libmicrohttpd libmicrohttpd-devel.x86_64 : Development files for libmicrohttpd libmicrohttpd-doc.noarch : Documentation for libmicrohttpd libmicrohttpd.i686 : Lightweight library for embedding a webserver in applications libmicrohttpd.x86_64 : Lightweight library for embedding a webserver in applications Name and summary matches only, use "search all" for everything. [root@ip-172-31-28-115 htdocs]# yum --enablerepo=* -y install libmicrohttpd-devel Loaded plugins: amazon-id, rhui-lb, search-disabled-repos Resolving Dependencies --> Running transaction check ---> Package libmicrohttpd-devel.x86_64 0:0.9.33-2.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ====================================================================================================================================================================== Package Arch Version Repository Size ====================================================================================================================================================================== Installing: libmicrohttpd-devel x86_64 0.9.33-2.el7 rhui-REGION-rhel-server-optional 28 k Transaction Summary ====================================================================================================================================================================== Install 1 Package Total download size: 28 k Installed size: 74 k Downloading packages: libmicrohttpd-devel-0.9.33-2.el7.x86_64.rpm | 28 kB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : 
libmicrohttpd-devel-0.9.33-2.el7.x86_64 1/1 Verifying : libmicrohttpd-devel-0.9.33-2.el7.x86_64 1/1 Installed: libmicrohttpd-devel.x86_64 0:0.9.33-2.el7 Complete! [root@ip-172-31-28-115 htdocs]#
- and re-configured + re-installed
[root@ip-172-31-28-115 janus-gateway]# ./configure --prefix=/opt/janus --enable-rest ... checking for EVENTS... yes checking for npm... /bin/npm checking that generated files are newer than configure... done configure: creating ./config.status config.status: creating Makefile config.status: creating html/Makefile config.status: creating docs/Makefile config.status: executing depfiles commands config.status: executing libtool commands libsrtp version: 2.x SSL/crypto library: OpenSSL DTLS set-timeout: not available DataChannels support: no Recordings post-processor: no TURN REST API client: yes Doxygen documentation: no Transports: REST (HTTP/HTTPS): yes WebSockets: no RabbitMQ: no MQTT: no Unix Sockets: yes Plugins: Echo Test: yes Streaming: yes Video Call: yes SIP Gateway (Sofia): no SIP Gateway (libre): no NoSIP (RTP Bridge): yes Audio Bridge: no Video Room: yes Voice Mail: no Record&Play: yes Text Room: yes Lua Interpreter: yes Event handlers: Sample event handler: yes RabbitMQ event handler:no JavaScript modules: no If this configuration is ok for you, do a 'make' to start building Janus. A 'make install' will install Janus and its plugins to the specified prefix. Finally, a 'make configs' will install some sample configuration files too (something you'll only want to do the first time, though). [root@ip-172-31-28-115 janus-gateway]# make && make install && make configs ... [root@ip-172-31-28-115 janus-gateway]#
- and this time it started with the REST over HTTP transport instead of only the Unix socket
[root@ip-172-31-28-115 janus-gateway]# /opt/janus/bin/janus Janus commit: d8da250294cbdc193252ce059ef281ba0e2ff5bd Compiled on: Fri May 4 00:11:11 UTC 2018 --------------------------------------------------- Starting Meetecho Janus (WebRTC Gateway) v0.4.0 --------------------------------------------------- Checking command line arguments... Debug/log level is 4 Debug/log timestamps are disabled Debug/log colors are enabled Adding 'vmnet' to the ICE ignore list... Using 172.31.28.115 as local IP... [WARN] Token based authentication disabled Initializing recorder code Initializing ICE stuff (Full mode, ICE-TCP candidates disabled, half-trickle, IPv6 support disabled) TURN REST API backend: (disabled) [WARN] Janus is deployed on a private address (172.31.28.115) but you didn't specify any STUN server! Expect trouble if this is supposed to work over the internet and not just in a LAN... Crypto: OpenSSL pre-1.1.0 [WARN] The libsrtp installation does not support AES-GCM profiles Fingerprint of our certificate: D2:B9:31:8F:DF:24:D8:0E:ED:D2:EF:25:9E:AF:6F:B8:34:AE:53:9C:E6:F3:8F:F2:64:15:FA:E8:7F:53:2D:38 [WARN] Data Channels support not compiled [WARN] Event handlers support disabled Plugins folder: /opt/janus/lib/janus/plugins Loading plugin 'libjanus_echotest.so'... JANUS EchoTest plugin initialized! Loading plugin 'libjanus_recordplay.so'... JANUS Record&Play plugin initialized! Loading plugin 'libjanus_nosip.so'... JANUS NoSIP plugin initialized! Loading plugin 'libjanus_streaming.so'... JANUS Streaming plugin initialized! Loading plugin 'libjanus_videocall.so'... JANUS VideoCall plugin initialized! Loading plugin 'libjanus_videoroom.so'... JANUS VideoRoom plugin initialized! Loading plugin 'libjanus_textroom.so'... JANUS TextRoom plugin initialized! Loading plugin 'libjanus_lua.so'... 
[ERR] [plugins/janus_lua.c:janus_lua_init:1136] Error loading Lua script /opt/janus/share/janus/lua/echotest.lua: /opt/janus/share/janus/lua/echotest.lua:6: module 'json' not found: no field package.preload['json'] no file './json.lua' no file '/usr/share/lua/5.1/json.lua' no file '/usr/share/lua/5.1/json/init.lua' no file '/usr/lib64/lua/5.1/json.lua' no file '/usr/lib64/lua/5.1/json/init.lua' no file '/opt/janus/share/janus/lua/json.lua' no file './json.so' no file '/usr/lib64/lua/5.1/json.so' no file '/usr/lib64/lua/5.1/loadall.so' [WARN] The 'janus.plugin.lua' plugin could not be initialized Transport plugins folder: /opt/janus/lib/janus/transports Loading transport plugin 'libjanus_http.so'... Joining Janus requests handler thread Sessions watchdog started HTTP webserver started (port 8088, /janus path listener)... [WARN] HTTPS webserver disabled [WARN] Admin/monitor HTTP webserver disabled [WARN] Admin/monitor HTTPS webserver disabled JANUS REST (HTTP/HTTPS) transport plugin initialized! Loading transport plugin 'libjanus_pfunix.so'... [WARN] Unix Sockets server disabled (Janus API) [WARN] Unix Sockets server disabled (Admin API) [WARN] No Unix Sockets server started, giving up... [WARN] The 'janus.transport.pfunix' plugin could not be initialized
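- the Lua plugin failure is non-fatal for our purposes, but as an untested sketch, the require('json') could likely be satisfied by dropping a pure-Lua JSON module onto the package.path shown in the error; rxi's single-file json.lua is one such library (choice of library is my assumption, not from the janus docs)

```shell
# untested sketch: satisfy require('json') for janus's lua plugin by placing a
# single-file pure-lua json library on the package.path listed in the error above
# (rxi/json.lua is an assumption; any lua 5.1 json module should do)
wget -O /opt/janus/share/janus/lua/json.lua \
  https://raw.githubusercontent.com/rxi/json.lua/master/json.lua
```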
- I still got an error on the demo page ("Probably a network error, is the gateway down?: [object Object]"), but that's probably an aws security group issue; it does appear to be listening over http, finally
[root@ip-172-31-28-115 htdocs]# ss -plan | grep -i janus
u_str ESTAB 0 0 * 900321 * 900320 users:(("janus",pid=25589,fd=13))
u_str ESTAB 0 0 * 900320 * 900321 users:(("janus",pid=25589,fd=12))
udp UNCONN 0 0 *:5002 *:* users:(("janus",pid=25589,fd=5))
udp UNCONN 0 0 *:5004 *:* users:(("janus",pid=25589,fd=6))
tcp LISTEN 0 32 :::8088 :::* users:(("janus",pid=25589,fd=11))
[root@ip-172-31-28-115 htdocs]#
- I created a new security group called 'videoconf-dev' that has inbound ports opened for 22, 443, 80, & 8088. I assigned this security group to the dev node in ec2 in addition to the 'default' group
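- for reference, the same security group could be created from the aws cli instead of the console (a hedged sketch; the description and the open-to-world CIDR are my assumptions, and --group-name only resolves in the default VPC)

```shell
# hedged sketch of the console steps above: create 'videoconf-dev' and open
# the four inbound tcp ports (0.0.0.0/0 is an assumption; tighten as needed)
aws ec2 create-security-group --group-name videoconf-dev \
  --description "janus/jitsi dev node"
for port in 22 80 443 8088; do
  aws ec2 authorize-security-group-ingress --group-name videoconf-dev \
    --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```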
- that helped; now chrome is yelling at me that it doesn't want to initiate a would-be secure webrtc connection over an insecure http line. It makes sense, but I wish it could be overridden for this test
WebRTC error... {"name":"NotSupportedError","message":"Only secure origins are allowed (see: https://goo.gl/Y0ZkNV)."}
- this is supposed to be possible with the aptly named --unsafely-treat-insecure-origin-as-secure flag https://sites.google.com/a/chromium.org/dev/Home/chromium-security/deprecating-powerful-features-on-insecure-origins
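- invoking that flag would look something like this (a sketch; the origin is our dev node's public hostname, and per the chromium doc a throwaway --user-data-dir is required for the flag to take effect)

```shell
# test-only sketch: treat our insecure dev origin as secure for one chrome
# session; the throwaway profile dir is required by the flag
google-chrome \
  --unsafely-treat-insecure-origin-as-secure="http://ec2-34-210-153-174.us-west-2.compute.amazonaws.com:8088" \
  --user-data-dir=/tmp/chrome-test-profile
```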
- documented the install guide for janus
# install epel
yum -y install epel-release
# install other depends, per the documentation
yum -y install libmicrohttpd-devel jansson-devel libnice-devel openssl-devel libsrtp-devel sofia-sip-devel glib-devel opus-devel libogg-devel libcurl-devel lua-devel pkgconfig gengetopt libtool autoconf automake
# install other depends, per my discovery of their necessity
yum -y install glibc2-devel
yum --enablerepo=* -y install libnice-devel jansson-devel lua-devel
# add lib dir
cat << EOF > /etc/ld.so.conf.d/janus.conf
/usr/lib
/opt/janus/lib/janus/plugins
EOF
ldconfig
# get & compile janus gateway
mkdir -p $HOME/sandbox
pushd $HOME/sandbox
git clone https://github.com/meetecho/janus-gateway.git
cd janus-gateway
sh autogen.sh
./configure --prefix=/opt/janus
make
make install
make configs
Tue May 01, 2018
- Marcin pointed out that our prod wiki doesn't allow new users to register because the recaptcha api is refusing to generate content with the error "reCAPTCHA V1 IS SHUTDOWN / Direct site owners to g.co/recaptcha/upgrade" http://opensourceecology.org/w/index.php?title=Special:RequestAccount
- Before I make changes to the prod site, I first confirmed that last night's backup completed successfully
hancock% du -sh hetzner1/*
0       hetzner1/20180426-052001
12G     hetzner1/20180427-052001
12G     hetzner1/20180428-052001
12G     hetzner1/20180429-052001
12G     hetzner1/20180430-052001
12G     hetzner1/20180501-052002
hancock%
- I found our existing recaptcha keys at $wgReCaptchaPublicKey and $wgReCaptchaPrivateKey in hetzner1:/usr/www/users/soemain/w/LocalSettings.php
- I tried changing $wgCaptchaClass from 'ReCaptcha' to 'MathCaptcha', but it immediately caused an error
- it's not obvious which account holds our current recaptcha credentials, so I just created a case-specific account for this & stored its credentials in keepass = recaptcha@opensourceecology.org
- I created a new key pair for ReCaptcha v2, but simply dropping it in made no changes. I dug deeper, and I found this note in the ConfirmEdit extension wiki page https://www.mediawiki.org/wiki/Extension:ConfirmEdit#ReCaptcha
As noted in the ReCaptcha FAQ, Google does not support the ReCaptcha version 1 anymore, on which this CAPTCHA module depends. You should consider upgrading to version 2 of ReCaptcha (see the ReCaptchaNoCaptcha module). This module will be removed from ConfirmEdit in the near future (see task T142133).
- That page also lists other captcha options: SimpleCaptcha, FancyCaptcha, MathCaptcha, QuestyCaptcha, ReCaptcha, RecaptchaNoCaptcha
- I earlier confirmed that MathCaptcha fails
- I confirmed that SimpleCaptcha works
- I confirmed that FancyCaptcha fails
[Tue May 01 15:39:44.432265 2018] [:error] [pid 10877] [client 127.0.0.1:41864] PHP Fatal error: Class 'FancyCaptcha' not found in /var/www/html/wiki.opensourceecology.org/htdocs/extensions/ConfirmAccount/frontend/specialpages/actions/RequestAccount_body.php on line 248
- I confirmed that QuestyCaptcha fails
[Tue May 01 15:40:46.098136 2018] [:error] [pid 8193] [client 127.0.0.1:41984] PHP Fatal error: Class 'QuestyCaptcha' not found in /var/www/html/wiki.opensourceecology.org/htdocs/extensions/ConfirmAccount/frontend/specialpages/actions/RequestAccount_body.php on line 248
- the above failures make sense, per the documentation
Some of these modules require additional setup work:
MathCaptcha requires both the presence of TeX and, for versions of MediaWiki after 1.17, the Math extension; FancyCaptcha requires running a preliminary setup script in Python; and reCAPTCHA requires obtaining API keys.
- I don't want to break the prod wiki, and this should really be done *after* the migration, so for now I'll just set the prod site to use SimpleCaptcha. After the migration, I'd like to look into using a question (e.g. "what is the opposite of open source?") + a locally-generated captcha (FancyCaptcha). Personally, I've had recaptcha break on me many times in the past, and I'd rather not depend on an external service for this.
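- the interim change itself is tiny; a minimal sketch of the LocalSettings.php edit (ConfirmEdit is already loaded, since $wgCaptchaClass was previously 'ReCaptcha'; appending works because the later assignment wins)

```shell
# minimal sketch: the only change is $wgCaptchaClass; ConfirmEdit stays loaded
# (run from the wiki's docroot; path varies between our prod & staging hosts)
cat << 'EOF' >> LocalSettings.php
$wgCaptchaClass = 'SimpleCaptcha';
EOF
```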
Thu Apr 26, 2018
- Meeting with Marcin
- Wiki validation so far
- 2 outstanding issues
- We have a superfluous link at the top-right: both "Create Account" and "Request Account" appear; we should delete the "Create Account" link
- Marcin can't log in because his old password was <20 characters, and the new wiki requires Administrators to have >=20-character passwords
- Test Plan
- Marcin will make a first draft of a google doc with an exhaustive list of things to test on the wiki (50-100-ish items) & send it to me by next week. It should include
- Both simple & complex daily or routine wiki tasks
- OSE-specific workflow tasks on the wiki that have broken in the past per his experiences that pre-date me
- other wiki functions that we discovered were broken in the past few months when validating the staging wiki
- we may migrate the wiki together in-person when I visit FeF for a few days in Mid-May
- Jitsi
- I had some questions about this (see below), but we didn't get to it (we focused mostly on the wiki & backups)
- what are the reqs for September? Same as before?
- thoughts on hipchat?
- maybe Rocket Chat
- or maybe Jangouts, powered by Janus (C > Node)
- I reset Marcin's password on the wiki to be >=20 characters using 'htdocs/maintenance/changePassword.php'
[root@hetzner2 maintenance]# php changePassword.php --user=Marcin --password='fake567890example890'
PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 715
PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 674
PHP Notice: Undefined index: HTTP_USER_AGENT in /var/www/html/wiki.opensourceecology.org/LocalSettings.php on line 5
PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 715
PHP Notice: Undefined index: SERVER_NAME in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 1507
PHP Notice: Undefined index: SERVER_NAME in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 1507
PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693
Password set for Marcin
[root@hetzner2 maintenance]#
Wed Apr 25, 2018
- atlassian got back to me giving me a free Hipchat Community License
- it's not well advertised, but I managed to find that (as of 2018-01) the limit of participants in a single Stride call is 25 https://community.atlassian.com/t5/Stride-questions/How-many-participants-can-join-a-stride-video-call/qaq-p/694918
- while Hipchat has a limit of 20 participants https://confluence.atlassian.com/hipchat/video-chat-with-your-team-838548935.html
- updated our Videoconferencing article to include Rocket Chat and Hipchat
Tue Apr 24, 2018
- I sent an email to Atlassian requesting a community license request for Stride https://www.atlassian.com/software/views/community-license-request
- Atlassian automatically mailed me a service desk ticket for this request https://getsupport.atlassian.com/servicedesk/customer/portal/35/CA-452135
- I found a great article describing what Jitsi gives us as an SFU https://webrtchacks.com/atlassian-sfu-qa/
- and this video comparing mesh vs sfu vs mcu https://webrtcglossary.com/mcu/
- and this great article https://bloggeek.me/how-many-users-webrtc-call
- we might actually want to look into open source MCUs instead of an SFU. The downside is more processing on our server, but I think that'd be less of a bottleneck than each participant downloading (n-1) streams.
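- the mesh vs SFU vs MCU trade-off can be sketched as per-client stream counts (my back-of-envelope shell arithmetic, not figures from the linked articles)

```shell
# per-client media stream counts for an n-participant call:
# mesh: every client sends to and receives from the other n-1 peers
# sfu:  every client sends 1 stream up and receives n-1 forwarded streams
# mcu:  every client sends 1 stream up and receives 1 mixed stream
n=8
mesh_streams_per_client=$(( 2 * (n - 1) ))
sfu_streams_per_client=$(( 1 + (n - 1) ))
mcu_streams_per_client=$(( 1 + 1 ))
echo "mesh=$mesh_streams_per_client sfu=$sfu_streams_per_client mcu=$mcu_streams_per_client"
```

so at 8 participants a mesh client juggles 14 streams while an MCU client only ever handles 2, at the cost of the server mixing everything.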
- responded to Marcin's validation follow-up email
- (3) Chrome ERR_BLOCKED_BY_XSS_AUDITOR on Preview
- marcin found another permission denied issue when attempting to view the preview of the FAQ page https://wiki.opensourceecology.org/index.php?title=FAQ&action=submit
- this was a red herring; it's just another mod_security false-positive that's distinct from the chrome ERR_BLOCKED_BY_XSS_AUDITOR issue
- I whitelisted 959071 = sqli, and I confirmed that fixed the issue
- Marcin said he's running chromium Version 64.0.3282.167 (Official Build) Built on Ubuntu , running on Ubuntu 16.04 (64-bit)
- My chromium is Version 57.0.2987.98 Built on 8.7, running on Debian 8.10 (64-bit)
- I went ahead and updated my ose VM to debian 9, but chromium in debian 9 is still Version 57.0.2987.98 Built on 8.7, running on Debian 9.4 (64-bit)
- the preview bug was supposed to be fixed in v58, but maybe the bug was just that it was blocking the page *and* the debugging. In any case, I can't see what's causing chrome to issue the ERR_BLOCKED_BY_XSS_AUDITOR
- I spun up a disposable vm & installed the latest version of google chrome, but that failed
[user@fedora-23-dvm ~]$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm --2018-04-24 13:38:31-- https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm Resolving dl.google.com (dl.google.com)... 172.217.23.174, 2a00:1450:4001:81f::200e Connecting to dl.google.com (dl.google.com)|172.217.23.174|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 52364199 (50M) [application/x-rpm] Saving to: ‘google-chrome-stable_current_x86_64.rpm’ google-chrome-stable_current_x86 100%[=========================================================>] 49.94M 1.19MB/s in 47s 2018-04-24 13:39:24 (1.06 MB/s) - ‘google-chrome-stable_current_x86_64.rpm’ saved [52364199/52364199] [user@fedora-23-dvm ~]$ sudo dnf install google-chrome-stable_current_x86_64.rpm Last metadata expiration check: 0:09:47 ago on Tue Apr 24 13:37:39 2018. Error: nothing provides libssl3.so(NSS_3.28)(64bit) needed by google-chrome-stable-66.0.3359.117-1.x86_64 (try to add '--allowerasing' to command line to replace conflicting packages) [user@fedora-23-dvm ~]$ sudo dnf --allowerasing install google-chrome-stable_current_x86_64.rpm Last metadata expiration check: 0:10:35 ago on Tue Apr 24 13:37:39 2018. Error: nothing provides libssl3.so(NSS_3.28)(64bit) needed by google-chrome-stable-66.0.3359.117-1.x86_64
- fedora does have chromium in its repos; I installed them, but this was even more out-of-date @ Version 54.0.2840.90 Fedora Project (64-bit)
- I found & installed the google-chrome depend (openssl-devel), but the same error occurred. I found a fix of installing fedora 27 :| https://stackoverflow.com/questions/48839199/install-google-chrome-in-fedora-23
- giving up on fedora 27, I created a new VM from debian-9 for installing google-chrome https://unix.stackexchange.com/questions/20614/how-do-i-install-the-latest-version-of-chromium-in-debian-squeeze
root@google-chrome:~# vim /etc/apt/apt.conf.d/ 00notify-hook 20auto-upgrades 50appstream 70debconf 01autoremove 20listchanges 50unattended-upgrades 70no-unattended 01autoremove-kernels 20packagekit 60gnome-software root@google-chrome:~# vim /etc/apt/apt.conf.d/^C root@google-chrome:~# wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb --2018-04-24 14:29:16-- https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb Resolving dl.google.com (dl.google.com)... 172.217.23.174, 2a00:1450:4001:81f::200e Connecting to dl.google.com (dl.google.com)|172.217.23.174|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 52231002 (50M) [application/x-debian-package] Saving to: ‘google-chrome-stable_current_amd64.deb’ google-chrome-stable_curre 100%[========================================>] 49.81M 1.19MB/s in 57s 2018-04-24 14:30:24 (895 KB/s) - ‘google-chrome-stable_current_amd64.deb’ saved [52231002/52231002] root@google-chrome:~# dpkg -i google-chrome-stable_current_amd64.deb Selecting previously unselected package google-chrome-stable. (Reading database ... 123078 files and directories currently installed.) Preparing to unpack google-chrome-stable_current_amd64.deb ... Unpacking google-chrome-stable (66.0.3359.117-1) ... dpkg: dependency problems prevent configuration of google-chrome-stable: google-chrome-stable depends on libappindicator3-1; however: Package libappindicator3-1 is not installed. dpkg: error processing package google-chrome-stable (--install): dependency problems - leaving unconfigured Processing triggers for man-db (2.7.6.1-2) ... Processing triggers for qubes-core-agent (3.2.28-1+deb9u1) ... Processing triggers for desktop-file-utils (0.23-1) ... Processing triggers for mime-support (3.60) ... Errors were encountered while processing: google-chrome-stable root@google-chrome:~#
- I installed it by letting apt fix the broken dependencies
apt-get --fix-broken install
- now I'm running Google Chrome Version 66.0.3359.117 (Official Build) (64-bit)
- I loaded Marcin's log in this very-new-version of Google Chrome, logged in, clicked to edit the page, and to view the preview. I got the ERR_BLOCKED_BY_XSS_AUDITOR error again, and the console was still blank :(
- As a test, I tried the same thing on our old site, and I got the issue there too! That means this issue is not related to our migration. If it hasn't been a major blocker before the migration, I think we should just mark this as "won't fix" with the solution being "use firefox" for the few times this occurs
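- if firefox is too disruptive a workaround, chrome of this era also had a per-launch switch for the auditor (hedged: flag name from memory, untested by me)

```shell
# hedged, test-only: launch chrome with the xss auditor off for one session
google-chrome --disable-xss-auditor
```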
Mon Apr 23, 2018
- verified that our ec2 charges are still $0 in the aws console
- I created a dns entry for jitsi.opensourceecology.org on cloudflare pointing to 34.210.153.174 = our ec2 instance's public IP
- continuing with the jitsi manual install on our free ec2 t2.micro dev instance, I went to install jitsi meet; here's the easy part
# download jitsi meet
pushd /var/www/html/jitsi.opensourceecology.org
git clone https://github.com/jitsi/jitsi-meet.git
mv htdocs htdocs.`date "+%Y%m%d_%H%M%S"`.old
mv "jitsi-meet" "htdocs"
pushd htdocs
- but all the npm stuff fails, as Christian found. There are some mentions of this on github, suggesting that some have made it work https://github.com/jitsi/jitsi-meet/search?o=desc&q=centos&s=created&type=Issues
[root@ip-172-31-28-115 htdocs]# yum install -y npm nodejs ... [root@ip-172-31-28-115 htdocs]# npm install npm WARN deprecated nomnom@1.6.2: Package no longer supported. Contact support@npmjs.com for more info. npm WARN deprecated connect@2.30.2: connect 2.x series is deprecated Killed ..] / preinstall:url-polyfill: sill doParallel preinstall 855 [root@ip-172-31-28-115 htdocs]# make ./node_modules/.bin/webpack -p module.js:478 throw err; ^ Error: Cannot find module 'babel-preset-react' at Function.Module._resolveFilename (module.js:476:15) at Function.resolve (internal/module.js:27:19) at Object.<anonymous> (/var/www/html/jitsi.opensourceecology.org/htdocs/webpack.config.js:80:29) at Module._compile (module.js:577:32) at Object.Module._extensions..js (module.js:586:10) at Module.load (module.js:494:32) at tryModuleLoad (module.js:453:12) at Function.Module._load (module.js:445:3) at Module.require (module.js:504:17) at require (internal/module.js:20:19) make: *** [compile] Error 1 [root@ip-172-31-28-115 htdocs]#
- the source is line 80 of webpack.config.js, which has a line = "require.resolve('babel-preset-react')"
[root@ip-172-31-28-115 htdocs]# sed -n 70,90p /var/www/html/jitsi.opensourceecology.org/htdocs/webpack.config.js // jitsi-meet. The require.resolve, of course, mandates the use // of the prefix babel-preset- in the preset names. presets: [ [ require.resolve('babel-preset-env'), // Tell babel to avoid compiling imports into CommonJS // so that webpack may do tree shaking. { modules: false } ], require.resolve('babel-preset-react'), require.resolve('babel-preset-stage-1') ] }, test: /\.jsx?$/ }, { // Expose jquery as the globals $ and jQuery because it is expected // to be available in such a form by multiple jitsi-meet // dependencies including lib-jitsi-meet. loader: 'expose-loader?$!expose-loader?jQuery', [root@ip-172-31-28-115 htdocs]#
- these issues were specifically mentioned in this issue https://github.com/jitsi/jitsi-meet/issues/446
- I fixed this by manually installing the packages called-out
npm install babel-preset-react
npm install babel-preset-stage-1
- but then the next make had a different issue
[root@ip-172-31-28-115 htdocs]# make ./node_modules/.bin/webpack -p Hash: f38346765ad69b5229dc9fb40aa6056b410b3138 Version: webpack 3.9.1 Child Hash: f38346765ad69b5229dc Time: 20725ms Asset Size Chunks Chunk Names dial_in_info_bundle.min.js 90.5 kB 0 [emitted] dial_in_info_bundle app.bundle.min.js 90.3 kB 1 [emitted] app.bundle dial_in_info_bundle.min.map 735 kB 0 [emitted] dial_in_info_bundle app.bundle.min.map 735 kB 1 [emitted] app.bundle [90] (webpack)/buildin/global.js 509 bytes {0} {1} [built] [327] multi babel-polyfill whatwg-fetch ./app.js 52 bytes {1} [built] [328] multi babel-polyfill whatwg-fetch ./react/features/base/react/prop-types-polyfill.js ./react/features/invite/components/dial-in-info-page 64 bytes {0} [built] + 326 hidden modules ERROR in Entry module not found: Error: Can't resolve 'babel-loader' in '/var/www/html/jitsi.opensourceecology.org/htdocs' ERROR in Entry module not found: Error: Can't resolve 'babel-loader' in '/var/www/html/jitsi.opensourceecology.org/htdocs' ERROR in Entry module not found: Error: Can't resolve 'babel-loader' in '/var/www/html/jitsi.opensourceecology.org/htdocs' ERROR in multi babel-polyfill whatwg-fetch ./app.js Module not found: Error: Can't resolve 'babel-loader' in '/var/www/html/jitsi.opensourceecology.org/htdocs' @ multi babel-polyfill whatwg-fetch ./app.js ERROR in multi babel-polyfill whatwg-fetch ./react/features/base/react/prop-types-polyfill.js ./react/features/invite/components/dial-in-info-page Module not found: Error: Can't resolve 'babel-loader' in '/var/www/html/jitsi.opensourceecology.org/htdocs' @ multi babel-polyfill whatwg-fetch ./react/features/base/react/prop-types-polyfill.js ./react/features/invite/components/dial-in-info-page ERROR in multi babel-polyfill whatwg-fetch ./react/features/base/react/prop-types-polyfill.js ./react/features/invite/components/dial-in-info-page Module not found: Error: Can't resolve 'babel-loader' in '/var/www/html/jitsi.opensourceecology.org/htdocs' @ multi 
babel-polyfill whatwg-fetch ./react/features/base/react/prop-types-polyfill.js ./react/features/invite/components/dial-in-info-page ERROR in multi babel-polyfill whatwg-fetch ./app.js Module not found: Error: Can't resolve 'whatwg-fetch' in '/var/www/html/jitsi.opensourceecology.org/htdocs' @ multi babel-polyfill whatwg-fetch ./app.js ERROR in multi babel-polyfill whatwg-fetch ./react/features/base/react/prop-types-polyfill.js ./react/features/invite/components/dial-in-info-page Module not found: Error: Can't resolve 'whatwg-fetch' in '/var/www/html/jitsi.opensourceecology.org/htdocs' @ multi babel-polyfill whatwg-fetch ./react/features/base/react/prop-types-polyfill.js ./react/features/invite/components/dial-in-info-page Child Hash: 9fb40aa6056b410b3138 Time: 20699ms Asset Size Chunks Chunk Names external_api.min.js 90.5 kB 0 [emitted] external_api external_api.min.map 735 kB 0 [emitted] external_api [90] (webpack)/buildin/global.js 509 bytes {0} [built] [125] multi babel-polyfill ./modules/API/external/index.js 40 bytes {0} [built] + 326 hidden modules ERROR in multi babel-polyfill ./modules/API/external/index.js Module not found: Error: Can't resolve 'babel-loader' in '/var/www/html/jitsi.opensourceecology.org/htdocs' @ multi babel-polyfill ./modules/API/external/index.js make: *** [compile] Error 2 [root@ip-172-31-28-115 htdocs]#
- fixed by installing babel-loader
npm install babel-loader
make
- the next failure lacked info..
[root@ip-172-31-28-115 htdocs]# make
./node_modules/.bin/webpack -p
make: *** [compile] Killed
[root@ip-172-31-28-115 htdocs]# echo $?
2
[root@ip-172-31-28-115 htdocs]#
- I opened up port 80 & 443 on the server's security group in the aws console's ec2 service
- to simplify debugging, I changed the nginx config to use port 80, not 443.
- selinux had to be relaxed to prevent the "permission denied" errors that popped up in the logs:
==> /var/log/nginx/error.log <== 2018/04/24 02:57:58 [error] 3101#0: *1 open() "/var/www/html/jitsi.opensourceecology.org/htdocs/index.html" failed (13: Permission denied), client: 76.97.223.185, server: jitsi.opensourceecology.org, request: "GET /index.html HTTP/1.1", host: "ec2-34-210-153-174.us-west-2.compute.amazonaws.com" ==> /var/log/nginx/access.log <== 76.97.223.185 - - [24/Apr/2018:02:57:58 +0000] "GET /index.html HTTP/1.1" 403 180 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0" "-"
- I worked around this by setting selinux to Permissive & restarting nginx
[root@ip-172-31-28-115 jitsi.opensourceecology.org]# getenforce
Enforcing
[root@ip-172-31-28-115 jitsi.opensourceecology.org]# setenforce Permissive
[root@ip-172-31-28-115 jitsi.opensourceecology.org]# service nginx restart
Redirecting to /bin/systemctl restart nginx.service
[root@ip-172-31-28-115 jitsi.opensourceecology.org]#
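- note that setenforce doesn't survive a reboot; an untested alternative that would keep enforcing mode is to label the new docroot with the standard httpd content context

```shell
# untested sketch: give the jitsi docroot the httpd content label instead of
# running permissive (semanage is in policycoreutils-python on centos 7)
semanage fcontext -a -t httpd_sys_content_t \
  "/var/www/html/jitsi.opensourceecology.org(/.*)?"
restorecon -Rv /var/www/html/jitsi.opensourceecology.org
```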
- ok, this got a page load, but with an error
Uh oh! We couldn't fully download everything we needed :( We will try again shortly. In the mean time, check for problems with your Internet connection! Missing http://ec2-34-210-153-174.us-west-2.compute.amazonaws.com/libs/lib-jitsi-meet.min.js?v=139 show less reload now
- ah, the "Killed" issue is because linux is running out of memory
[root@ip-172-31-28-115 ~]# tail -n 100 /var/log/messages Apr 24 03:00:35 ip-172-31-28-115 dbus[519]: avc: received setenforce notice (enforcing=0) Apr 24 03:00:35 ip-172-31-28-115 dbus-daemon: dbus[519]: avc: received setenforce notice (enforcing=0) Apr 24 03:00:42 ip-172-31-28-115 systemd: Stopping The nginx HTTP and reverse proxy server... Apr 24 03:00:42 ip-172-31-28-115 systemd: Starting The nginx HTTP and reverse proxy server... Apr 24 03:00:42 ip-172-31-28-115 nginx: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok Apr 24 03:00:42 ip-172-31-28-115 nginx: nginx: configuration file /etc/nginx/nginx.conf test is successful Apr 24 03:00:42 ip-172-31-28-115 systemd: Failed to read PID from file /run/nginx.pid: Invalid argument Apr 24 03:00:42 ip-172-31-28-115 systemd: Started The nginx HTTP and reverse proxy server. Apr 24 03:01:01 ip-172-31-28-115 systemd: Started Session 88 of user root. Apr 24 03:01:01 ip-172-31-28-115 systemd: Starting Session 88 of user root. Apr 24 03:07:57 ip-172-31-28-115 kernel: systemd invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0 Apr 24 03:07:57 ip-172-31-28-115 kernel: systemd cpuset=/ mems_allowed=0 Apr 24 03:07:57 ip-172-31-28-115 kernel: CPU: 0 PID: 1 Comm: systemd Not tainted 3.10.0-693.11.6.el7.x86_64 #1 Apr 24 03:07:57 ip-172-31-28-115 kernel: Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006 Apr 24 03:07:57 ip-172-31-28-115 kernel: Call Trace: Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff816a5ea1>] dump_stack+0x19/0x1b Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff816a1296>] dump_header+0x90/0x229 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff812b9dfb>] ? cred_has_capability+0x6b/0x120 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff81188094>] oom_kill_process+0x254/0x3d0 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff812b9fde>] ? 
selinux_capable+0x2e/0x40 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff811888d6>] out_of_memory+0x4b6/0x4f0 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff816a1d9a>] __alloc_pages_slowpath+0x5d6/0x724 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff8118eaa5>] __alloc_pages_nodemask+0x405/0x420 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff811d6075>] alloc_pages_vma+0xb5/0x200 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff816a924e>] ? __wait_on_bit+0x7e/0x90 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff811b42a0>] handle_mm_fault+0xb60/0xfa0 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff8132f4b3>] ? number.isra.2+0x323/0x360 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff816b37e4>] __do_page_fault+0x154/0x450 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff816b3b15>] do_page_fault+0x35/0x90 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff816af8f8>] page_fault+0x28/0x30 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff81332459>] ? copy_user_enhanced_fast_string+0x9/0x20 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff8122854b>] ? 
seq_read+0x2ab/0x3b0 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff8120295c>] vfs_read+0x9c/0x170 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff8120381f>] SyS_read+0x7f/0xe0 Apr 24 03:07:57 ip-172-31-28-115 kernel: [<ffffffff816b89fd>] system_call_fastpath+0x16/0x1b Apr 24 03:07:57 ip-172-31-28-115 kernel: Mem-Info: Apr 24 03:07:57 ip-172-31-28-115 kernel: active_anon:214589 inactive_anon:4438 isolated_anon:0#012 active_file:36 inactive_file:783 isolated_file:0#012 unevictable:0 dirty:0 writeback:0 unstable:0#012 slab_reclaimable:4921 slab_unreclaimable:8167#012 mapped:879 shmem:6524 pagetables:2371 bounce:0#012 free:12232 free_pcp:118 free_cma:0 Apr 24 03:07:57 ip-172-31-28-115 kernel: Node 0 DMA free:4588kB min:704kB low:880kB high:1056kB active_anon:10344kB inactive_anon:4kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15988kB managed:15904kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:360kB slab_reclaimable:268kB slab_unreclaimable:476kB kernel_stack:48kB pagetables:140kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Apr 24 03:07:57 ip-172-31-28-115 kernel: lowmem_reserve[]: 0 973 973 973 Apr 24 03:07:57 ip-172-31-28-115 kernel: Node 0 DMA32 free:44340kB min:44348kB low:55432kB high:66520kB active_anon:848012kB inactive_anon:17748kB active_file:144kB inactive_file:3132kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1032192kB managed:998676kB mlocked:0kB dirty:0kB writeback:0kB mapped:3516kB shmem:25736kB slab_reclaimable:19416kB slab_unreclaimable:32192kB kernel_stack:2352kB pagetables:9344kB unstable:0kB bounce:0kB free_pcp:472kB local_pcp:472kB free_cma:0kB writeback_tmp:0kB pages_scanned:711 all_unreclaimable? 
yes Apr 24 03:07:57 ip-172-31-28-115 kernel: lowmem_reserve[]: 0 0 0 0 Apr 24 03:07:57 ip-172-31-28-115 kernel: Node 0 DMA: 25*4kB (UEM) 19*8kB (UE) 19*16kB (UEM) 12*32kB (UE) 5*64kB (UE) 12*128kB (UEM) 3*256kB (M) 0*512kB 1*1024kB (E) 0*2048kB 0*4096kB = 4588kB Apr 24 03:07:57 ip-172-31-28-115 kernel: Node 0 DMA32: 1281*4kB (UEM) 1040*8kB (UE) 593*16kB (UEM) 251*32kB (UE) 67*64kB (UE) 41*128kB (UEM) 9*256kB (UEM) 3*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 44340kB Apr 24 03:07:57 ip-172-31-28-115 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Apr 24 03:07:57 ip-172-31-28-115 kernel: 7345 total pagecache pages Apr 24 03:07:57 ip-172-31-28-115 kernel: 0 pages in swap cache Apr 24 03:07:57 ip-172-31-28-115 kernel: Swap cache stats: add 0, delete 0, find 0/0 Apr 24 03:07:57 ip-172-31-28-115 kernel: Free swap = 0kB Apr 24 03:07:57 ip-172-31-28-115 kernel: Total swap = 0kB Apr 24 03:07:57 ip-172-31-28-115 kernel: 262045 pages RAM Apr 24 03:07:57 ip-172-31-28-115 kernel: 0 pages HighMem/MovableOnly Apr 24 03:07:57 ip-172-31-28-115 kernel: 8400 pages reserved Apr 24 03:07:57 ip-172-31-28-115 kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 321] 0 321 8841 803 24 0 0 systemd-journal Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 519] 81 519 24607 167 18 0 -900 dbus-daemon Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 524] 0 524 135883 1481 84 0 0 NetworkManager Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 531] 0 531 6051 82 16 0 0 systemd-logind Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 562] 0 562 28343 3126 58 0 0 dhclient Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 1031] 0 1031 22386 260 43 0 0 master Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 1033] 89 1033 22429 253 46 0 0 qmgr Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 1139] 0 1139 27511 33 12 0 0 agetty Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 1140] 0 1140 27511 33 10 0 0 agetty Apr 24 03:07:57 ip-172-31-28-115 kernel: 
[ 8587] 0 8587 11692 394 27 0 -1000 systemd-udevd Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 9607] 0 9607 28198 256 58 0 -1000 sshd Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 9722] 0 9722 13877 111 29 0 -1000 auditd Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 9760] 0 9760 58390 946 48 0 0 rsyslogd Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 9781] 0 9781 31571 159 18 0 0 crond Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 9826] 999 9826 134633 1647 60 0 0 polkitd Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 9903] 0 9903 26991 38 9 0 0 rhnsd Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 9914] 0 9914 143438 3265 99 0 0 tuned Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 9985] 998 9985 25120 91 20 0 0 chronyd Apr 24 03:07:57 ip-172-31-28-115 kernel: [11651] 0 11651 32007 188 17 0 0 screen Apr 24 03:07:57 ip-172-31-28-115 kernel: [11652] 0 11652 28891 135 13 0 0 bash Apr 24 03:07:57 ip-172-31-28-115 kernel: [12264] 1001 12264 1291983 15415 88 0 0 java Apr 24 03:07:57 ip-172-31-28-115 kernel: [12516] 0 12516 47972 155 49 0 0 su Apr 24 03:07:57 ip-172-31-28-115 kernel: [12517] 1001 12517 28859 103 13 0 0 bash Apr 24 03:07:57 ip-172-31-28-115 kernel: [12611] 0 12611 28859 105 13 0 0 bash Apr 24 03:07:57 ip-172-31-28-115 kernel: [27867] 0 27867 39164 336 80 0 0 sshd Apr 24 03:07:57 ip-172-31-28-115 kernel: [27870] 1000 27870 39361 552 78 0 0 sshd Apr 24 03:07:57 ip-172-31-28-115 kernel: [27871] 1000 27871 28859 95 13 0 0 bash Apr 24 03:07:57 ip-172-31-28-115 kernel: [27892] 1000 27892 31908 66 18 0 0 screen Apr 24 03:07:57 ip-172-31-28-115 kernel: [27893] 1000 27893 32113 296 16 0 0 screen Apr 24 03:07:57 ip-172-31-28-115 kernel: [27894] 1000 27894 28859 115 13 0 0 bash Apr 24 03:07:57 ip-172-31-28-115 kernel: [30326] 0 30326 54628 261 63 0 0 sudo Apr 24 03:07:57 ip-172-31-28-115 kernel: [30327] 0 30327 47972 155 49 0 0 su Apr 24 03:07:57 ip-172-31-28-115 kernel: [30328] 0 30328 28892 125 13 0 0 bash Apr 24 03:07:57 ip-172-31-28-115 kernel: [30369] 1000 30369 28859 105 13 0 0 bash Apr 
24 03:07:57 ip-172-31-28-115 kernel: [30385] 0 30385 54628 261 61 0 0 sudo Apr 24 03:07:57 ip-172-31-28-115 kernel: [30386] 0 30386 47972 155 49 0 0 su Apr 24 03:07:57 ip-172-31-28-115 kernel: [30387] 0 30387 28892 141 14 0 0 bash Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 3017] 997 3017 24190 5474 48 0 0 lua Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 3164] 0 3164 30201 523 54 0 0 nginx Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 3165] 996 3165 30317 649 56 0 0 nginx Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 3177] 0 3177 30814 57 15 0 0 anacron Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 3180] 89 3180 22428 276 46 0 0 pickup Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 3185] 0 3185 27066 60 9 0 0 make Apr 24 03:07:57 ip-172-31-28-115 kernel: [ 3186] 0 3186 469819 174655 759 0 0 node Apr 24 03:07:57 ip-172-31-28-115 kernel: Out of memory: Kill process 3186 (node) score 670 or sacrifice child Apr 24 03:07:57 ip-172-31-28-115 kernel: Killed process 3186 (node) total-vm:1879276kB, anon-rss:698620kB, file-rss:0kB, shmem-rss:0kB Apr 24 03:10:36 ip-172-31-28-115 su: (to root) ec2-user on pts/5 [root@ip-172-31-28-115 ~]#
- looks like we need a bigger box :\ or swap?
[root@ip-172-31-28-115 ~]# free -m total used free shared buff/cache available Mem: 990 179 689 25 121 652 Swap: 0 0 0 [root@ip-172-31-28-115 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda2 10G 3.8G 6.3G 38% / devtmpfs 476M 0 476M 0% /dev tmpfs 496M 0 496M 0% /dev/shm tmpfs 496M 26M 470M 6% /run tmpfs 496M 0 496M 0% /sys/fs/cgroup tmpfs 100M 0 100M 0% /run/user/0 tmpfs 100M 0 100M 0% /run/user/1000 [root@ip-172-31-28-115 ~]#
- we have 1G of RAM and 6G of free disk space. I created a 2G swap file & enabled it
[root@ip-172-31-28-115 ~]# dd if=/dev/zero of=/swap1 bs=1M count=2048 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 30.2187 s, 71.1 MB/s [root@ip-172-31-28-115 ~]# mkswap /swap1 Setting up swapspace version 1, size = 2097148 KiB no label, UUID=a401ef50-ccb8-4bca-abeb-9de5a63b107c [root@ip-172-31-28-115 ~]# chmod 0600 /swap1 [root@ip-172-31-28-115 ~]# swapon /swap1 [root@ip-172-31-28-115 ~]# free -m total used free shared buff/cache available Mem: 990 180 69 25 740 608 Swap: 2047 0 2047 [root@ip-172-31-28-115 ~]#
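One caveat: a swap file enabled with `swapon` alone is gone after a reboot; persisting it takes an fstab entry. A sketch, assuming the file stays at /swap1:

```shell
# append to /etc/fstab so /swap1 is re-enabled at boot
echo '/swap1 swap swap defaults 0 0' >> /etc/fstab

# verify the swap file is currently active
swapon -s
```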
- the next `make` run wasn't killed by the OOM killer, but it failed with a bunch of complaints about missing node modules:
./node_modules/.bin/webpack -p Hash: 63ec7d579fb8d7f2444256d53b2e2b409f23ea8a Version: webpack 3.9.1 Child Hash: 63ec7d579fb8d7f24442 Time: 72401ms Asset Size Chunks Chunk Names app.bundle.min.js 317 kB 0 [emitted] [big] app.bundle dial_in_info_bundle.min.js 116 kB 1 [emitted] dial_in_info_bundle device_selection_popup_bundle.min.js 101 kB 2 [emitted] device_selection_popup_bundle do_external_connect.min.js 2 kB 3 [emitted] do_external_connect alwaysontop.min.js 731 bytes 4 [emitted] alwaysontop app.bundle.min.map 4.21 MB 0 [emitted] app.bundle dial_in_info_bundle.min.map 912 kB 1 [emitted] dial_in_info_bundle device_selection_popup_bundle.min.map 2.11 MB 2 [emitted] device_selection_popup_bundle do_external_connect.min.map 19.4 kB 3 [emitted] do_external_connect alwaysontop.min.map 39.2 kB 4 [emitted] alwaysontop [43] ./react/features/base/config/index.js + 4 modules 13.2 kB {0} {1} {2} [built] [70] ./react/features/base/config/parseURLParams.js 1.51 kB {0} {1} {2} {3} [built] [122] ./react/features/base/config/getRoomName.js 761 bytes {0} {1} {2} {3} [built] [176] ./react/features/base/react/prop-types-polyfill.js 227 bytes {0} {1} {2} [built] [444] ./modules/settings/Settings.js 6.12 kB {0} [built] [450] multi babel-polyfill whatwg-fetch ./app.js 52 bytes {0} [built] [451] ./app.js + 2 modules 4.67 kB {0} [built] [465] ./modules/UI/UI.js 35 kB {0} [built] [480] ./react/index.web.js 1.06 kB {0} [built] [481] ./react/features/device-selection/popup.js 476 bytes {2} [built] [482] ./react/features/device-selection/DeviceSelectionPopup.js 12.3 kB {2} [built] [483] ./react/features/always-on-top/index.js + 2 modules 20.3 kB {4} [built] [484] multi babel-polyfill whatwg-fetch ./react/features/base/react/prop-types-polyfill.js ./react/features/invite/components/dial-in-info-page 64 bytes {1} [built] [485] ./react/features/invite/components/dial-in-info-page/index.js + 2 modules 4 kB {1} [built] [486] ./connection_optimization/do_external_connect.js 2.51 kB {3} [built] + 
472 hidden modules ERROR in ./react/features/invite/components/AddPeopleDialog.web.js Module not found: Error: Can't resolve '@atlaskit/avatar' in '/var/www/html/jitsi.opensourceecology.org/htdocs/react/features/invite/components' @ ./react/features/invite/components/AddPeopleDialog.web.js 15:0-38 @ ./react/features/invite/components/index.js @ ./react/features/invite/index.js @ ./react/features/toolbox/components/Toolbox.web.js @ ./react/features/toolbox/components/index.js @ ./react/features/toolbox/index.js @ ./conference.js @ ./app.js @ multi babel-polyfill whatwg-fetch ./app.js ERROR in ./react/features/base/dialog/components/StatelessDialog.web.js Module not found: Error: Can't resolve '@atlaskit/button' in '/var/www/html/jitsi.opensourceecology.org/htdocs/react/features/base/dialog/components' @ ./react/features/base/dialog/components/StatelessDialog.web.js 9:0-55 @ ./react/features/base/dialog/components/index.js @ ./react/features/base/dialog/index.js @ ./modules/keyboardshortcut/keyboardshortcut.js @ ./app.js @ multi babel-polyfill whatwg-fetch ./app.js ...
- the log file above was huge; here's a grep for just the missing module bit
[root@ip-172-31-28-115 htdocs]# grep -i 'module not found' make.log | cut -d' ' -f1-11 | sort -u Module not found: Error: Can't resolve '@atlaskit/avatar' Module not found: Error: Can't resolve '@atlaskit/button' Module not found: Error: Can't resolve '@atlaskit/dropdown-menu' Module not found: Error: Can't resolve '@atlaskit/field-text' Module not found: Error: Can't resolve '@atlaskit/field-text-area' Module not found: Error: Can't resolve '@atlaskit/flag' Module not found: Error: Can't resolve '@atlaskit/icon/glyph/chevron-down' Module not found: Error: Can't resolve '@atlaskit/icon/glyph/editor/info' Module not found: Error: Can't resolve '@atlaskit/icon/glyph/error' Module not found: Error: Can't resolve '@atlaskit/icon/glyph/star' Module not found: Error: Can't resolve '@atlaskit/icon/glyph/star-filled' Module not found: Error: Can't resolve '@atlaskit/icon/glyph/warning' Module not found: Error: Can't resolve '@atlaskit/inline-dialog' Module not found: Error: Can't resolve '@atlaskit/inline-message' Module not found: Error: Can't resolve '@atlaskit/layer-manager' Module not found: Error: Can't resolve '@atlaskit/lozenge' Module not found: Error: Can't resolve '@atlaskit/modal-dialog' Module not found: Error: Can't resolve '@atlaskit/multi-select' Module not found: Error: Can't resolve '@atlaskit/spinner' Module not found: Error: Can't resolve '@atlaskit/tabs' Module not found: Error: Can't resolve '@atlaskit/theme' Module not found: Error: Can't resolve '@atlaskit/tooltip' Module not found: Error: Can't resolve 'autosize' Module not found: Error: Can't resolve 'i18next' Module not found: Error: Can't resolve 'i18next-browser-languagedetector' Module not found: Error: Can't resolve 'i18next-xhr-backend' Module not found: Error: Can't resolve 'jitsi-meet-logger' Module not found: Error: Can't resolve 'jquery' Module not found: Error: Can't resolve 'jquery-contextmenu' Module not found: Error: Can't resolve 'jquery-i18next' Module not found: Error: Can't 
resolve 'jQuery-Impromptu' Module not found: Error: Can't resolve 'js-md5' Module not found: Error: Can't resolve 'jwt-decode' Module not found: Error: Can't resolve 'lodash' Module not found: Error: Can't resolve 'lodash/debounce' Module not found: Error: Can't resolve 'lodash/throttle' Module not found: Error: Can't resolve 'moment' Module not found: Error: Can't resolve 'moment/locale/bg' Module not found: Error: Can't resolve 'moment/locale/de' Module not found: Error: Can't resolve 'moment/locale/eo' Module not found: Error: Can't resolve 'moment/locale/es' Module not found: Error: Can't resolve 'moment/locale/fr' Module not found: Error: Can't resolve 'moment/locale/hy-am' Module not found: Error: Can't resolve 'moment/locale/it' Module not found: Error: Can't resolve 'moment/locale/nb' Module not found: Error: Can't resolve 'moment/locale/pl' Module not found: Error: Can't resolve 'moment/locale/pt' Module not found: Error: Can't resolve 'moment/locale/pt-br' Module not found: Error: Can't resolve 'moment/locale/ru' Module not found: Error: Can't resolve 'moment/locale/sk' Module not found: Error: Can't resolve 'moment/locale/sl' Module not found: Error: Can't resolve 'moment/locale/sv' Module not found: Error: Can't resolve 'moment/locale/tr' Module not found: Error: Can't resolve 'moment/locale/zh-cn' Module not found: Error: Can't resolve 'postis' Module not found: Error: Can't resolve 'prop-types' Module not found: Error: Can't resolve 'react' Module not found: Error: Can't resolve 'react-dom' Module not found: Error: Can't resolve 'react-i18next' Module not found: Error: Can't resolve 'react-redux' Module not found: Error: Can't resolve 'redux' Module not found: Error: Can't resolve 'redux-thunk' Module not found: Error: Can't resolve 'url-polyfill' Module not found: Error: Can't resolve 'whatwg-fetch' [root@ip-172-31-28-115 htdocs]#
Fri Apr 20, 2018
- I terminated my free-tier ec2 instance last night so I could ensure that it cost $0 before leaving it running. I checked the bill this morning, and I saw ec2 pop into our bill. The total was $0 for 4 hours of RHEL t2.micro instance-hours under the monthly free tier + $0 for 0.040 GB-Mo of EBS General Purpose SSD provisioned storage under the monthly free tier. The only other fee I could imagine us incurring is for general Data Transfer. Currently we're at 348.233G (mostly from huge files into glacier). All of that is $0 under the free tier so far.
- relaunched the instance with higher confidence that it's totally $0 = i-05a5af8b75bb5a0d9
- connected to the new instance over ssh
user@ose:~$ ssh -p 22 -i .ssh/id_rsa.ose ec2-user@ec2-34-210-153-174.us-west-2.compute.amazonaws.com The authenticity of host 'ec2-34-210-153-174.us-west-2.compute.amazonaws.com (34.210.153.174)' can't be established. ECDSA key fingerprint is 89:74:84:57:64:c4:9a:71:fd:8d:9d:22:59:3c:d2:4d. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ec2-34-210-153-174.us-west-2.compute.amazonaws.com,34.210.153.174' (ECDSA) to the list of known hosts. [ec2-user@ip-172-31-28-115 ~]$
- fixed various issues in the existing Jitsi install documentation
- added nginx to jitsi install documentation
# install it from the repos yum install -y nginx # create config file for jitsi.opensourceecology.org mkdir -p /var/www/html/jitsi.opensourceecology.org/htdocs cat << EOF > /etc/nginx/conf.d/jitsi.opensourceecology.org server_names_hash_bucket_size 64; server { listen 443; # tls configuration that is not covered in this guide # we recommend the use of https://certbot.eff.org/ server_name jitsi.opensourceecology.org; # set the root root /var/www/html/jitsi.opensourceecology.org/htdocs; index index.html; location ~ ^/([a-zA-Z0-9=\?]+)$ { rewrite ^/(.*)$ / break; } location / { ssi on; } # BOSH location /http-bind { proxy_pass http://localhost:5280/http-bind; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; } } EOF
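Before trusting that vhost, it's worth a sanity pass: the heredoc above writes to a file without a `.conf` suffix, which (assuming the stock `include /etc/nginx/conf.d/*.conf;` line in nginx.conf) nginx would silently ignore. A sketch of that fix plus validation:

```shell
# the stock nginx.conf only includes conf.d/*.conf, so add the suffix
mv /etc/nginx/conf.d/jitsi.opensourceecology.org \
   /etc/nginx/conf.d/jitsi.opensourceecology.org.conf

# validate the syntax, then start nginx now and on boot
nginx -t
systemctl enable nginx
systemctl start nginx
```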
- successfully installed maven from the upstream binary tarball https://xmodulo.com/how-to-install-maven-on-centos.html
wget http://mirror.metrocast.net/apache/maven/maven-3/3.5.3/binaries/apache-maven-3.5.3-bin.tar.gz tar -xzvf apache-maven-*.tar.gz -C /usr/local pushd /usr/local ln -s apache-maven-* maven
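To make `mvn` resolvable without typing the full path, a small profile script can extend PATH for login shells. A sketch only, assuming the /usr/local/maven symlink created above:

```shell
# /etc/profile.d/maven.sh is sourced by login shells on CentOS
cat << EOF > /etc/profile.d/maven.sh
export M2_HOME=/usr/local/maven
export PATH=\$PATH:\$M2_HOME/bin
EOF
```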
- version of maven was verified as 3.5.3
[root@ip-172-31-28-115 local]# /usr/local/maven/bin/mvn -v Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) Maven home: /usr/local/maven Java version: 1.8.0_171, vendor: Oracle Corporation Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.el7.x86_64/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64", family: "unix" [root@ip-172-31-28-115 local]#
Thu Apr 19, 2018
- I'm still waiting for Marcin to validate the staging wiki & send me the first draft of our migration-day test plan google doc
- I still don't want to do backups to s3 yet, as I want to migrate the wiki first; once hetzner1 is nearly empty, that should cut our daily backup size by ~12G.
- I checked our usage on dreamhost, we're at 132G. Hopefully that flies under the radar until we finish the wiki migration
- After those blocked items, jitsi is probably my most important task
- I started this item by building a local VM running centos7 http://www.gtlib.gatech.edu/pub/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1708.iso
- I looked into software that's built on jitsi to see if maybe there was a project that bundled what we need into stable builds designed for centos (because everything produced by jitsi is intended for debian) https://jitsi.org/built-on-jitsi/
- I found an Administrator's guide to "Jitsi Video Bridge" for the Rocket Chat project that had instructions for enabling Video Bridge, because their built-in video chat uses p2p WebRTC, which won't scale beyond 3 users or so. But there's a separate section on setting up your own Jitsi Video Bridge, which indicates that when you enable the video bridge function in Rocket Chat, it just uses jitsi.org's infrastructure. To run the Video Bridge yourself, they simply link you back to the Jitsi source/docs, which are best suited for Debian https://rocket.chat/docs/administrator-guides/jitsi-video-bridge/
- The Matrix project appears to be designed for 1:1 calls, not conference calls https://matrix.org/docs/guides/faq.html
- Atlassian's non-free Stride platform apparently uses Jitsi, and it states "Stride video conferencing is free for unlimited teammates." I don't know how far it scales, but I do know that Atlassian provides some of their products to non-profits for free. I emailed Marcin asking for a scanned copy of our income tax exemption letter as proof so I can ask Atlassian if they offer Stride for free to non-profits https://www.stride.com/conferencing#
- found the quick install guide for debian https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md
- found the server install from source for other OSs https://github.com/jitsi/jitsi-meet/blob/master/doc/manual-install.md
- found the mailing list for jitsi users http://lists.jitsi.org/pipermail/users/
- found the mailing list for jitsi devs http://lists.jitsi.org/pipermail/dev/
- the search function of the lists.jitsi.org site sucks; google is better https://www.google.com/search?q="centos"+site%3Alists.jitsi.org&oq="centos"+site%3Alists.jitsi.org
- I only got 1 relevant result, regarding the 'speex' dependency, which isn't in the yum repos; a workaround was given in the thread http://lists.jitsi.org/pipermail/dev/2017-November/035909.html
- I installed my VM as "Server with GUI"
- The Template VM install failed; apparently CentOS isn't supported, and the guide to using a non-supported OS (written for Arch) shows this process is very non-trivial
- I installed "Server with GUI" on an HVM, but it didn't come back up after the post-install reboot
NMI watchdog: BUG: soft lockup - CPU#X stuck for Ys!
- I installed another HVM as minimal, but it had the same issue!
- I checked our aws account, and we should get 750 hours of t2.micro instances per month. That's enough to run one 24/7. This offer expires 12 months from our signup date, so it's use it or lose it. I'm going to use it.
- I added my public key to the aws ec2 service from the aws console & named it 'maltfield'
- I changed the default security group to only allow port 22 TCP inbound; outbound is completely open.
- I launched a t2.micro node (i-065bfc806b4f39923) with this security group
- I used the default 10G EBS, but I confirmed that for 12 months we get 30G of EBS storage for free.
- confirmed that I could log in via ssh
user@ose:~$ ssh -p 22 -i .ssh/id_rsa.ose ec2-user@ec2-52-32-28-0.us-west-2.compute.amazonaws.com Enter passphrase for key '.ssh/id_rsa.ose': user@ose:~$ ssh -p 22 -i .ssh/id_rsa.ose ec2-user@ec2-52-32-28-0.us-west-2.compute.amazonaws.com Last login: Fri Apr 20 00:40:02 2018 from c-76-97-223-185.hsd1.ga.comcast.net [ec2-user@ip-172-31-29-174 ~]$ hostname ip-172-31-29-174.us-west-2.compute.internal [ec2-user@ip-172-31-29-174 ~]$ ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000 link/ether 02:55:43:fa:bf:b2 brd ff:ff:ff:ff:ff:ff inet 172.31.29.174/20 brd 172.31.31.255 scope global dynamic eth0 valid_lft 3015sec preferred_lft 3015sec inet6 fe80::55:43ff:fefa:bfb2/64 scope link valid_lft forever preferred_lft forever [ec2-user@ip-172-31-29-174 ~]$ date Fri Apr 20 00:40:38 UTC 2018 [ec2-user@ip-172-31-29-174 ~]$ pwd /home/ec2-user [ec2-user@ip-172-31-29-174 ~]$
- created a wiki page titled "Jitsi"; here's what I have for the install so far
# become root sudo su - # first, update software yum update # install my prereqs yum install -y vim screen wget unzip # enable epel cat << EOF > /etc/yum.repos.d/epel.repo [epel] name=Extra Packages for Enterprise Linux 7 - \$basearch #baseurl=http://download.fedoraproject.org/pub/epel/7/\$basearch metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=\$basearch failovermethod=priority enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 [epel-debuginfo] name=Extra Packages for Enterprise Linux 7 - \$basearch - Debug #baseurl=http://download.fedoraproject.org/pub/epel/7/\$basearch/debug metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=\$basearch failovermethod=priority enabled=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 gpgcheck=1 [epel-source] name=Extra Packages for Enterprise Linux 7 - \$basearch - Source #baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=\$basearch failovermethod=priority enabled=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 gpgcheck=1 EOF # and epel key cat << EOF > /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 -----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1.4.11 (GNU/Linux) mQINBFKuaIQBEAC1UphXwMqCAarPUH/ZsOFslabeTVO2pDk5YnO96f+rgZB7xArB OSeQk7B90iqSJ85/c72OAn4OXYvT63gfCeXpJs5M7emXkPsNQWWSju99lW+AqSNm jYWhmRlLRGl0OO7gIwj776dIXvcMNFlzSPj00N2xAqjMbjlnV2n2abAE5gq6VpqP vFXVyfrVa/ualogDVmf6h2t4Rdpifq8qTHsHFU3xpCz+T6/dGWKGQ42ZQfTaLnDM jToAsmY0AyevkIbX6iZVtzGvanYpPcWW4X0RDPcpqfFNZk643xI4lsZ+Y2Er9Yu5 S/8x0ly+tmmIokaE0wwbdUu740YTZjCesroYWiRg5zuQ2xfKxJoV5E+Eh+tYwGDJ n6HfWhRgnudRRwvuJ45ztYVtKulKw8QQpd2STWrcQQDJaRWmnMooX/PATTjCBExB 9dkz38Druvk7IkHMtsIqlkAOQMdsX1d3Tov6BE2XDjIG0zFxLduJGbVwc/6rIc95 T055j36Ez0HrjxdpTGOOHxRqMK5m9flFbaxxtDnS7w77WqzW7HjFrD0VeTx2vnjj GqchHEQpfDpFOzb8LTFhgYidyRNUflQY35WLOzLNV+pV3eQ3Jg11UFwelSNLqfQf uFRGc+zcwkNjHh5yPvm9odR1BIfqJ6sKGPGbtPNXo7ERMRypWyRz0zi0twARAQAB 
tChGZWRvcmEgRVBFTCAoNykgPGVwZWxAZmVkb3JhcHJvamVjdC5vcmc+iQI4BBMB AgAiBQJSrmiEAhsPBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRBqL66iNSxk 5cfGD/4spqpsTjtDM7qpytKLHKruZtvuWiqt5RfvT9ww9GUUFMZ4ZZGX4nUXg49q ixDLayWR8ddG/s5kyOi3C0uX/6inzaYyRg+Bh70brqKUK14F1BrrPi29eaKfG+Gu MFtXdBG2a7OtPmw3yuKmq9Epv6B0mP6E5KSdvSRSqJWtGcA6wRS/wDzXJENHp5re 9Ism3CYydpy0GLRA5wo4fPB5uLdUhLEUDvh2KK//fMjja3o0L+SNz8N0aDZyn5Ax CU9RB3EHcTecFgoy5umRj99BZrebR1NO+4gBrivIfdvD4fJNfNBHXwhSH9ACGCNv HnXVjHQF9iHWApKkRIeh8Fr2n5dtfJEF7SEX8GbX7FbsWo29kXMrVgNqHNyDnfAB VoPubgQdtJZJkVZAkaHrMu8AytwT62Q4eNqmJI1aWbZQNI5jWYqc6RKuCK6/F99q thFT9gJO17+yRuL6Uv2/vgzVR1RGdwVLKwlUjGPAjYflpCQwWMAASxiv9uPyYPHc ErSrbRG0wjIfAR3vus1OSOx3xZHZpXFfmQTsDP7zVROLzV98R3JwFAxJ4/xqeON4 vCPFU6OsT3lWQ8w7il5ohY95wmujfr6lk89kEzJdOTzcn7DBbUru33CQMGKZ3Evt RjsC7FDbL017qxS+ZVA/HGkyfiu4cpgV8VUnbql5eAZ+1Ll6Dw== =hdPa -----END PGP PUBLIC KEY BLOCK----- EOF # update again yum update ####### # prosody # # install jitsi prereqs yum install -y prosody # configure prosody mkdir -p /etc/prosody/conf.avail/ cat << EOF > /etc/prosody/conf.avail/jitsi.opensourceecology.org.cfg.lua VirtualHost "jitsi.opensourceecology.org" authentication = "anonymous" ssl = { key = "/var/lib/prosody/jitsi.opensourceecology.org.key"; certificate = "/var/lib/prosody/jitsi.opensourceecology.org.crt"; } modules_enabled = { "bosh"; "pubsub"; } c2s_require_encryption = false VirtualHost "auth.jitsi.opensourceecology.org" ssl = { key = "/var/lib/prosody/auth.jitsi.opensourceecology.org.key"; certificate = "/var/lib/prosody/auth.jitsi.opensourceecology.org.crt"; } authentication = "internal_plain" admins = { "focus@auth.jitsi.opensourceecology.org" } Component "conference.jitsi.example.com" "muc" Component "jitsi-videobridge.jitsi.opensourceecology.org" component_secret = "YOURSECRET1" Component "focus.jitsi.opensourceecology.org" component_secret = "YOURSECRET2" EOF ln -s /etc/prosody/conf.avail/jitsi.opensourceecology.org.cfg.lua /etc/prosody/conf.d/jitsi.opensourceecology.org.cfg.lua 
prosodyctl cert generate jitsi.opensourceecology.org prosodyctl cert generate auth.jitsi.opensourceecology.org mkdir -p /usr/local/share/ca-certificates ln -sf /var/lib/prosody/auth.jitsi.opensourceecology.org.crt /usr/local/share/ca-certificates/auth.jitsi.opensourceecology.org.crt # this binary doesn't exist; TODO: find out if it's necessary? update-ca-certificates -f prosodyctl register focus auth.jitsi.opensourceecology.org YOURSECRET3 ######### # NGINX # ######### TODO ##################### # Jitsi Videobridge # ##################### # install depends yum install -y java-1.8.0-openjdk wget https://download.jitsi.org/jitsi-videobridge/linux/jitsi-videobridge-linux-x64-1053.zip unzip jitsi-videobridge-linux-x64-1053.zip cat << EOF > /home/ec2-user/.sip-communicator org.jitsi.impl.neomedia.transform.srtp.SRTPCryptoContext.checkReplay=false EOF chown ec2-user:ec2-user /home/ec2-user/.sip-communicator ./jvb.sh --host=localhost --domain=jitsi.opensourceecology.org --port=5347 --secret=YOURSECRET1
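Launching jvb.sh from an interactive shell dies with the session; if this install sticks, a systemd unit could supervise it instead. A sketch only — the unit name, paths, and user here are all assumptions to be adapted, and the secret must match YOURSECRET1 above:

```ini
# /etc/systemd/system/jitsi-videobridge.service (hypothetical)
[Unit]
Description=Jitsi Videobridge
After=network.target prosody.service

[Service]
User=ec2-user
# adjust to wherever the videobridge zip was extracted
WorkingDirectory=/home/ec2-user
ExecStart=/home/ec2-user/jvb.sh --host=localhost --domain=jitsi.opensourceecology.org --port=5347 --secret=YOURSECRET1
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After writing the unit, `systemctl daemon-reload` then `systemctl enable jitsi-videobridge` would register it.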
- terminated the instance; I'll check to see what the budget says in a couple days to make sure today's testing was totally free
Mon Apr 16, 2018
- fixed a false-positive mod_security issue that Catarina hit while trying to update obi (id 981318, SQLi)
- updated my log & the current meeting doc
Sun Apr 15, 2018
- Marcin got back to me pointing out a few differences between the staging & prod wikis:
1. Color on second graph of https://wiki.opensourceecology.org/wiki/Marcin_Log are different than original. Not sure if that matters.
2. Spacing is added after every main heading section in ephemeral. For example, https://wiki.opensourceecology.org/wiki/AbeAnd_Log. Or my log.
3. Error upon saving changes to my log. https://wiki.opensourceecology.org/index.php?title=Marcin_Log&action=submit . See screenshot, Point3.png.
4. I tried 3 again. It works. But Preview doesn't work - see screenshot. When I hit show preview, it gives me an error. See 2 screenshots Point4.png.
5. Picture upload doesn't work - see Point5 which I tried to upload from my log.
- my responses below for each
- (1) graph color
- the color appears to change on every refresh; this seems to be how Lex coded it
- (2) header spacing
- the element picker in the firefox debugger shows that there's an element defined as ".mw-body-content h1" with a "margin-top: 1em;" on our new wiki, but not the old wiki. When I check for the same thing on wikipedia, I see they also have a "margin-top: 1em;"
- I found the actual line to be in htdocs/skins/Vector/components/common.less
.mw-body-content { position: relative; line-height: @content-line-height; font-size: @content-font-size; z-index: 0; p { line-height: inherit; margin: 0.5em 0; } h1 { margin-top: 1em; } h2 { font-size: 1.5em; margin-top: 1em; }
- the oldest commit of this file in gerrit (from 2014-08-07) does *not* have this margin, so we should be able to isolate when it was added https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/skins/Vector/+/d28f09df312edbb72b19ac6ac5d124f11007a4ba/components/common.less
- ok, I isolated it to this change on 2015-05-23 https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/skins/Vector/+/35ca341ed4e9acfa00505e216d2416f73c253948%5E%21/#F0
Minor header fixes for Typography Refresh
This fixes top-margin for H1 inside .mw-body-content, and increase font-size for H3 from 1.17em to 1.2em (bumping to 17px default).
Final patch for this bug.
Bug: T66653
- the bug referenced is this one https://phabricator.wikimedia.org/T66653
Change-Id: I1e75bc4fc3e04ca6c9238d4ce116136e9bafacd1
- I recommended to Marcin that we just keep the theme defaults the same as what Wikipedia uses
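For the record, if we ever did want to suppress the added spacing without forking the skin, MediaWiki supports site-wide CSS overrides via the on-wiki page MediaWiki:Common.css. A sketch only; the exact value to restore the pre-refresh look is an assumption:

```css
/* MediaWiki:Common.css (hypothetical override) — undo the Typography Refresh
   top margin on h1 headings; 0 is a guess at the pre-refresh spacing */
.mw-body-content h1 {
    margin-top: 0;
}
```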
- (3) Request Entity Too Large
- I confirmed that I got this error when attempting to edit a very long article = my log
- the response I got was an error 413 Request Entity Too Large
Request Entity Too Large The requested resource /index.php does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
- at the same time, this popped into my logs
[root@hetzner2 httpd]# tail -f wiki.opensourceecology.org/access_log wiki.opensourceecology.org/error_log error_log ... => wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:18:02.842032 2018] [:error] [pid 32271] [client 127.0.0.1] ModSecurity: Request body no files data length is larger than the configured limit (131072).. Deny with code (413) [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNfGhEne29xOSpIJVtJbQAAAAg"] ==> wiki.opensourceecology.org/access_log <== 127.0.0.1 - - [15/Apr/2018:14:18:02 +0000] "POST /index.php?title=Maltfield_log_2018&action=submit HTTP/1.0" 413 338 "https://wiki.opensourceecology.org/index.php?title=Maltfield_log_2018&action=submit" "Mozilla/5.0 (X11; OpenBSD amd64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36"
- I fixed this by setting the SecRequestBodyNoFilesLimit to 1MB in "/etc/httpd/conf.d/00-wiki.opensourceecology.org.conf"
# disable mod_security with rules as needed # (found by logs in: /var/log/httpd/modsec_audit.log) <IfModule security2_module> SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 960915 200003 981173 981318 981260 950911 973302 973324 973317 981255 958057 958056 973327 950018 950001 958008 973329 # set the (sans file) POST size limit to 1M (default is 128K) SecRequestBodyNoFilesLimit 1000000 </IfModule>
- but now I got a 403 Forbidden from another false positive, a "generic attacks" rule; it matched "wget" inside my own log article :\
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:38:12.390804 2018] [:error] [pid 20103] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:[\\\\;\\\\|\\\\`]\\\\W*?\\\\bcc|\\\\b(wget|curl))\\\\b|\\\\/cc(?:[\\\\'\\"\\\\|\\\\;\\\\`\\\\-\\\\s]|$))" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_40_generic_attacks.conf"] [line "25"] [id "950907"] [rev "2"] [msg "System Command Injection"] [data "Matched Data: wget found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0aMy work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.\\x0d\\x0a\\x0d\\x0a=See Also=\\x0d\\x0a# Maltfield_Log\\x0d\\x0a# User:Maltfield\\x0d\\x0a# Special:Contributions/Maltfield\\x0d\\x0a\\x0d\\x0a=Sun Apr 08, 2018=\\x0d\\x0a# I checked again just a..."] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/COMMAND_INJECTION"] [tag "WASCTC/WASC-31"] [tag "OWASP_TOP_10/A1"] [tag "PCI/6.5. [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNj1GTPX8kx9Zp-CaA3zQAAAAM"] ==> wiki.opensourceecology.org/access_log <== 127.0.0.1 - - [15/Apr/2018:14:38:12 +0000] "POST /index.php?title=Maltfield_log_2018&action=submit HTTP/1.0" 403 211 "https://wiki.opensourceecology.org/index.php?title=Maltfield_log_2018&action=submit" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:48.0) Gecko/20100101 Firefox/48.0"
- I whitelisted "950907", but I got another. I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:39:55.629972 2018] [:error] [pid 20239] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "[\\\\n\\\\r](?:content-(type|length)|set-cookie|location):" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_40_generic_attacks.conf"] [line "134"] [id "950910"] [rev "2"] [msg "HTTP Response Splitting Attack"] [data "Matched Data: \\x0acontent-type: found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0amy work log from the year 2018. i intentionally made this verbose to make future admin's work easier when troubleshooting. the more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future ose sysadmin.\\x0d\\x0a\\x0d\\x0a=see also=\\x0d\\x0a# maltfield_log\\x0d\\x0a# user:maltfield\\x0d\\x0a# special:contributions/maltfield\\x0d\\x0a\\x0d\\x0a=sun apr 08, 2018=\\x0d\\x0a# i checked..."] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "9"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNkO8vleiEPKc6tRLX2PgAAAAU"]
- next I got a false-positive from "950005"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:46:24.788412 2018] [:error] [pid 21166] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?:\\\\b(?:\\\\.(?:ht(?:access|passwd|group)|www_?acl)|global\\\\.asa|httpd\\\\.conf|boot\\\\.ini)\\\\b|\\\\/etc\\\\/)" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_40_generic_attacks.conf"] [line "205"] [id "950005"] [rev "3"] [msg "Remote File Access Attempt"] [data "Matched Data: /etc/ found within ARGS:wpTextbox1: test1 my work log from the year 2018. i intentionally made this verbose to make future admins work easier when troubleshooting. the more keywords error messages etc that are listed in this log the more helpful it will be for the future ose sysadmin. =see also= # maltfield_log # user:maltfield # special:contributions/maltfield =sun apr 08 2018= # i checked again just after midnight the retry appears to have worked pretty great. just 2 archives ..."] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "9"] [tag "OWASP_CRS/WEB_ATTACK/FILE_INJECTION"] [tag "WASCTC/WASC-33"] [tag "OWASP_TOP_10 [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNlwEB2obt3oWmzHguGRAAAAAU"]
- next I got a false-positive from "950006"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:48:52.180898 2018] [:error] [pid 21264] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?:\\\\b(?:(?:n(?:et(?:\\\\b\\\\W+?\\\\blocalgroup|\\\\.exe)|(?:map|c)\\\\.exe)|t(?:racer(?:oute|t)|elnet\\\\.exe|clsh8?|ftp)|(?:w(?:guest|sh)|rcmd|ftp)\\\\.exe|echo\\\\b\\\\W*?\\\\by+)\\\\b|c(?:md(?:(?:\\\\.exe|32)\\\\b|\\\\b\\\\W*?\\\\/c)|d(?:\\\\b\\\\W*?[\\\\/]|\\\\W*?\\\\.\\\\.)|hmod.{0,40}?\\\\ ..." at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_40_generic_attacks.conf"] [line "221"] [id "950006"] [rev "3"] [msg "System Command Injection"] [data "Matched Data: `echo found within ARGS:wpTextbox1: test1 my work log from the year 2018. i intentionally made this verbose to make future admins work easier when troubleshooting. the more keywords error messages etc that are listed in this log the more helpful it will be for the future ose sysadmin. =see also= # maltfield_log # user:maltfield # special:contributions/maltfield =sun apr 08 2018= # i checked again just after midnight the retry appears to have worked pretty great. just 2 archives ..."] [severity [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNmVJDQehGtAwf9cZFxVQAAAAE"]
- next I got a false-positive from "959151"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:50:23.449483 2018] [:error] [pid 21758] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "<\\\\?(?!xml)" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_40_generic_attacks.conf"] [line "230"] [id "959151"] [rev "2"] [msg "PHP Injection Attack"] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "9"] [tag "OWASP_CRS/WEB_ATTACK/PHP_INJECTION"] [tag "WASCTC/WASC-15"] [tag "OWASP_TOP_10/A6"] [tag "PCI/6.5.2"] [tag "WASCTC/WASC-25"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE4"] [tag "PCI/6.5.2"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNmr7Npwd4Zn01cn3kYrgAAAAg"]
- next I got a false-positive from "958976"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:51:47.116321 2018] [:error] [pid 21825] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i)(?:\\\\b(?:f(?:tp_(?:nb_)?f?(?:ge|pu)t|get(?:s?s|c)|scanf|write|open|read)|gz(?:(?:encod|writ)e|compress|open|read)|s(?:ession_start|candir)|read(?:(?:gz)?file|dir)|move_uploaded_file|(?:proc_|bz)open|call_user_func)|\\\\$_(?:(?:pos|ge)t|session))\\\\b" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_40_generic_attacks.conf"] [line "233"] [id "958976"] [rev "2"] [msg "PHP Injection Attack"] [data "Matched Data: $_GET found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0aMy work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.\\x0d\\x0a\\x0d\\x0a=See Also=\\x0d\\x0a# Maltfield_Log\\x0d\\x0a# User:Maltfield\\x0d\\x0a# Special:Contributions/Maltfield\\x0d\\x0a\\x0d\\x0a=Sun Apr 08, 2018=\\x0d\\x0a# I checked again just ..."] [severity "CRITICAL [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNnAwxMy3-QXaGk8S8DZAAAAAg"]
- next I got a false-positive from "950007"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:53:13.101976 2018] [:error] [pid 21880] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:\\\\b(?:(?:s(?:ys\\\\.(?:user_(?:(?:t(?:ab(?:_column|le)|rigger)|object|view)s|c(?:onstraints|atalog))|all_tables|tab)|elect\\\\b.{0,40}\\\\b(?:substring|users?|ascii))|m(?:sys(?:(?:queri|ac)e|relationship|column|object)s|ysql\\\\.(db|user))|c(?:onstraint ..." at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "116"] [id "950007"] [rev "2"] [msg "Blind SQL Injection Attack"] [data "Matched Data: substring found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0aMy work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.\\x0d\\x0a\\x0d\\x0a=See Also=\\x0d\\x0a# Maltfield_Log\\x0d\\x0a# User:Maltfield\\x0d\\x0a# Special:Contributions/Maltfield\\x0d\\x0a\\x0d\\x0a=Sun Apr 08, 2018=\\x0d\\x0a# I checked again j..."] [ [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNnWSnNDDq2JC5w2P06-AAAAAk"]
- next I got a false-positive from "959070"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:54:27.832922 2018] [:error] [pid 21954] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "\\\\b(?i:having)\\\\b\\\\s+(\\\\d{1,10}|'[^=]{1,10}')\\\\s*?[=<>]|(?i:\\\\bexecute(\\\\s{1,5}[\\\\w\\\\.$]{1,5}\\\\s{0,3})?\\\\()|\\\\bhaving\\\\b ?(?:\\\\d{1,10}|[\\\\'\\"][^=]{1,10}[\\\\'\\"]) ?[=<>]+|(?i:\\\\bcreate\\\\s+?table.{0,20}?\\\\()|(?i:\\\\blike\\\\W*?char\\\\W*?\\\\()|(?i:(?:(select(.* ..." at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "130"] [id "959070"] [rev "2"] [msg "SQL Injection Attack"] [data "Matched Data: from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.\\x0d\\x0a\\x0d\\x0a=See Also=\\x0d\\x0a# Maltfield_Log\\x0d\\x0a# User:Maltfield\\x0d\\x0a# Special:Contributions/Maltfield\\x0d\\x0a\\x0d\\x0a=Sun Apr 08, 2018=\\x0d\\x0a# I checked again just after midnight; the retry appears to have worked pretty great. Just 2..."] [severi [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNno1tQGkBXlULnvmvpaAAAAAU"]
- next I got a false-positive from "950908"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:55:32.054195 2018] [:error] [pid 22398] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:\\\\b(?:coalesce\\\\b|root\\\\@))" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "140"] [id "950908"] [rev "2"] [msg "SQL Injection Attack."] [data "Matched Data: root@ found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0aMy work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.\\x0d\\x0a\\x0d\\x0a=See Also=\\x0d\\x0a# Maltfield_Log\\x0d\\x0a# User:Maltfield\\x0d\\x0a# Special:Contributions/Maltfield\\x0d\\x0a\\x0d\\x0a=Sun Apr 08, 2018=\\x0d\\x0a# I checked again just ..."] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNn44JlXq26uhw4uNUXCAAAAAU"]
- next I got a false-positive from "981250"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:57:24.386324 2018] [:error] [pid 22465] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:(select|;)\\\\s+(?:benchmark|if|sleep)\\\\s*?\\\\(\\\\s*?\\\\(?\\\\s*?\\\\w+))" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "215"] [id "981250"] [msg "Detects SQL benchmark and sleep injection attempts including conditional queries"] [data "Matched Data: ; \\x0d\\x0a\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09\\x09 \\x0d\\x0a\\x09 if (beresp found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0aMy work log from the year 2018. I intentionally made this verbose to make future admin's work easier w..."] [severity "CRITICAL"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNoVNnaOuebwb3@ICNKAwAAAAg"]
- next I got a false-positive from "981241"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:58:25.610212 2018] [:error] [pid 22500] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:[\\\\s()]case\\\\s*?\\\\()|(?:\\\\)\\\\s*?like\\\\s*?\\\\()|(?:having\\\\s*?[^\\\\s]+\\\\s*?[^\\\\w\\\\s])|(?:if\\\\s?\\\\([\\\\d\\\\w]\\\\s*?[=<>~]))" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "217"] [id "981241"] [msg "Detects conditional SQL injection attempts"] [data "Matched Data: having 'fundraiser' found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0aMy work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.\\x0d\\x0a\\x0d\\x0a=See Also=\\x0d\\x0a# Maltfield_Log\\x0d\\x0a# User:Maltfield\\x0d\\x0a# Special:Contributions/Maltfield\\x0d\\x0a\\x0d\\x0a=Sun Apr 08, 2018=\\x0d\\x0a# I check..."] [severity "CRITICAL"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNokaA9VhPOcZIEe5Ri-AAAAAU"]
- next I got a false-positive from "981252"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 14:59:16.861059 2018] [:error] [pid 22538] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:alter\\\\s*?\\\\w+.*?character\\\\s+set\\\\s+\\\\w+)|([\\"'`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98];\\\\s*?waitfor\\\\s+time\\\\s+[\\"'`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98])|(?:[\\"'`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98];.*?:\\\\s*?goto))" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "219"] [id "981252"] [msg "Detects MySQL charset switch and MSSQL DoS attempts"] [data "Matched Data: alternative. Other supported op code caches are: mmTurck, WinCache, XCache.\\x0d\\x0a\\x0d\\x0aOpcode caches store the compiled output of PHP scripts, greatly reducing the amount of time needed to run a script multiple times. MediaWiki does not need to be configured to do PHP bytecode caching and will \\x22just work\\x22 once installed and enabled them. \\x0d\\x0a</blockquote>\\x0d\\x0a## but we can't use OPcache for the mediawiki caching (ie: message caching) since it is only a opcode cache, not an ..."] [severity "CRITICAL"] [tag "OWA [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNoxMBiBJrTpPno-1d0JgAAAAU"]
- next I got a false-positive from "981256"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:00:06.400039 2018] [:error] [pid 22593] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:merge.*?using\\\\s*?\\\\()|(execute\\\\s*?immediate\\\\s*?[\\"'`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98])|(?:\\\\W+\\\\d*?\\\\s*?having\\\\s*?[^\\\\s\\\\-])|(?:match\\\\s*?[\\\\w(),+-]+\\\\s*?against\\\\s*?\\\\())" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "221"] [id "981256"] [msg "Detects MATCH AGAINST, MERGE, EXECUTE IMMEDIATE and HAVING injections"] [data "Matched Data: having 7 found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0aMy work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.\\x0d\\x0a\\x0d\\x0a=See Also=\\x0d\\x0a# Maltfield_Log\\x0d\\x0a# User:Maltfield\\x0d\\x0a# Special:Contributions/Maltfield\\x0d\\x0a\\x0d\\x0a=Sun Apr 08, 2018=\\x0d\\x0a# I checked again j..."] [severity "CRITICAL"] [tag "OWASP_CRS/WEB_ [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNo9sgDr4oS7e@QketkjwAAAAU"]
- next I got a false-positive from "981249"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:01:37.432935 2018] [:error] [pid 23080] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:[\\"'`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98]\\\\s+and\\\\s*?=\\\\W)|(?:\\\\(\\\\s*?select\\\\s*?\\\\w+\\\\s*?\\\\()|(?:\\\\*\\\\/from)|(?:\\\\+\\\\s*?\\\\d+\\\\s*?\\\\+\\\\s*?@)|(?:\\\\w[\\"'`\\xc2\\xb4\\xe2\\x80\\x99\\xe2\\x80\\x98]\\\\s*?(?:[-+=|@]+\\\\s*?)+[\\\\d(])|(?:coalesce\\\\s*?\\\\(|@@\\\\w+\\\\s*?[ ..." at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "233"] [id "981249"] [msg "Detects chained SQL injection attempts 2/2"] [data "Matched Data: case my gpg/tar manipulations deletes the originals and I don't want to pay to download them from Glacier again!\\x0d\\x0a<pre>\\x0d\\x0a[root@hetzner2 glacier-cli]# mkdir ../orig\\x0d\\x0a[root@hetzner2 glacier-cli]# cp hetzner1_20170901-052001.fileList.txt.bz2.gpg\\x5c:\\x5c this\\x5c is\\x5c a\\x5c metadata\\x5c file\\x5c showing\\x5c the\\x5c file\\x5c and\\x5c dir\\x5c list\\x5c contents\\x5c of\\x5c the\\x5c archive\\x5c of\\x5c the\\x5c same\\x5c prefix\\x5c name ../orig/\\x0d\\x0a[root@hetzner2 glacier-cli]# c. [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNpUcmL3wtkV-YO1ihZDAAAAAU"]
- next I got a false-positive from "981251"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:05:19.598269 2018] [:error] [pid 23148] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:create\\\\s+function\\\\s+\\\\w+\\\\s+returns)|(?:;\\\\s*?(?:select|create|rename|truncate|load|alter|delete|update|insert|desc)\\\\s*?[\\\\[(]?\\\\w{2,}))" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "241"] [id "981251"] [msg "Detects MySQL UDF injection and other data/structure manipulation attempts"] [data "Matched Data: ;\\x0d\\x0aCREATE USER found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0aMy work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin.\\x0d\\x0a\\x0d\\x0a=See Also=\\x0d\\x0a# Maltfield_Log\\x0d\\x0a# User:Maltfield\\x0d\\x0a# Special:Contributions/Maltfield\\x0d\\x0a\\x0d\\x0a=Sun Apr 08, 2018=\\x0d\\x0a# I chec..."] [severity "CRITICAL"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNqL8Ru5tp4rXAvCPJSGgAAAAQ"]
- next I got a false-positive from "973336"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:06:20.052955 2018] [:error] [pid 23683] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i)(<script[^>]*>[\\\\s\\\\S]*?<\\\\/script[^>]*>|<script[^>]*>[\\\\s\\\\S]*?<\\\\/script\\\\s\\\\S*[\\\\s\\\\S]|<script[^>]*>[\\\\s\\\\S]*?<\\\\/script[\\\\s]*[\\\\s]|<script[^>]*>[\\\\s\\\\S]*?<\\\\/script|<script[^>]*>[\\\\s\\\\S]*?)" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "14"] [id "973336"] [rev "1"] [msg "XSS Filter - Category 1: Script Tag Vector"] [data "Matched Data: <script>\\x0d\\x0a (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){\\x0d\\x0a (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),\\x0d\\x0a m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)\\x0d\\x0a })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');\\x0d\\x0a\\x0d\\x0a ga('create', 'UA-58526017-1', 'auto');\\x0d\\x0a ga('send', 'pageview');\\x0d\\x0a\\x0d\\x0a</script> found within ARGS..."] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [matu [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNqa27f2r5Ci7rKo19IZgAAAAE"]
- next I got a false-positive from "958006"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:07:37.678507 2018] [:error] [pid 23747] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "<body\\\\b.*?\\\\bbackground\\\\b" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "115"] [id "958006"] [rev "2"] [msg "Cross-site Scripting (XSS) Attack"] [data "Matched Data: <body> <h1>not found</h1> <p\\x22. skipping. all renewal attempts failed. the following certs could not be renewed: /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem (failure) ------------------------------------------------------------------------------- processing /etc/letsencrypt/renewal/opensourceecology.org.conf ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- proce..."] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNquZaUhFQqxRu6UgZZrAAAAAE"]
- next I got a false-positive from "958049"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:08:36.257521 2018] [:error] [pid 23804] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "\\\\< ?meta\\\\b" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "169"] [id "958049"] [rev "2"] [msg "Cross-site Scripting (XSS) Attack"] [data "Matched Data: <meta found within ARGS:wpTextbox1: test1 my work log from the year 2018. i intentionally made this verbose to make future admin's work easier when troubleshooting. the more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future ose sysadmin. =see also= # maltfield_log # user:maltfield # special:contributions/maltfield =sun apr 08, 2018= # i checked again just after midnight; the retry appears to have worked pretty great. just 2 arc..."] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNq8w21sJT73GKTS-bkXgAAAAU"]
- next I got a false-positive from "958051"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:09:43.732355 2018] [:error] [pid 23864] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "\\\\< ?script\\\\b" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "211"] [id "958051"] [rev "2"] [msg "Cross-site Scripting (XSS) Attack"] [data "Matched Data: <script found within ARGS:wpTextbox1: test1 my work log from the year 2018. i intentionally made this verbose to make future admin's work easier when troubleshooting. the more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future ose sysadmin. =see also= # maltfield_log # user:maltfield # special:contributions/maltfield =sun apr 08, 2018= # i checked again just after midnight; the retry appears to have worked pretty great. just 2 a..."] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNrN-L3I7mwD9Dr@YNrLAAAAAE"]
- next I got a false-positive from "973305"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:12:17.682638 2018] [:error] [pid 24338] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(asfunction|javascript|vbscript|data|mocha|livescript):" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "351"] [id "973305"] [rev "2"] [msg "XSS Attack Detected"] [data "Matched Data: data: found within ARGS:wpTextbox1: test1myworklogfromtheyear2018.iintentionallymadethisverbosetomakefutureadmin'sworkeasierwhentroubleshooting.themorekeywords,errormessages,etcthatarelistedinthislog,themorehelpfulitwillbeforthefutureosesysadmin.=seealso=#maltfield_log#user:maltfield#special:contributions/maltfield=sunapr08,2018=#icheckedagainjustaftermidnight;theretryappearstohaveworkedprettygreat.just2archivesfailedonthisrun<pre>hancock%datesatapr722:14:46pdt2018hancock%pwd/ho..."] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNr0e4ceIffaD1NC6xG1AAAAAU"]
- next I got a false-positive from "973314"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:13:51.802124 2018] [:error] [pid 24463] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "<!(doctype|entity)" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "464"] [id "973314"] [rev "2"] [msg "XSS Attack Detected"] [data "Matched Data: <!doctype found within ARGS:wpTextbox1: test1\\x0d\\x0a\\x0d\\x0amy work log from the year 2018. i intentionally made this verbose to make future admin's work easier when troubleshooting. the more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future ose sysadmin.\\x0d\\x0a\\x0d\\x0a=see also=\\x0d\\x0a# maltfield_log\\x0d\\x0a# user:maltfield\\x0d\\x0a# special:contributions/maltfield\\x0d\\x0a\\x0d\\x0a=sun apr 08, 2018=\\x0d\\x0a# i checked again j..."] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNsL2emkLMGeDo3MiZaagAAAAU"]
- next I got a false-positive from "973331"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:14:45.386955 2018] [:error] [pid 24511] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:<script.*?>)" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "472"] [id "973331"] [rev "2"] [msg "IE XSS Filters - Attack Detected."] [data "Matched Data: <script> found within ARGS:wpTextbox1: test1 My work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin. =See Also= # Maltfield_Log # User:Maltfield # Special:Contributions/Maltfield =Sun Apr 08, 2018= # I checked again just after midnight; the retry appears to have worked pretty great. Just 2 ..."] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNsZEtxjbRbHLbCcH0csAAAAAU"]
- next I got a false-positive from "973330"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:15:55.146051 2018] [:error] [pid 24972] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:<script.*?[ /+\\\\t]*?((src)|(xlink:href)|(href))[ /+\\\\t]*=)" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "476"] [id "973330"] [rev "2"] [msg "IE XSS Filters - Attack Detected."] [data "Matched Data: <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src= found within ARGS:wpTextbox1: test1 My work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin..."] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5. [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNsqmLfl9DexEr1vNbnJwAAAAA"]
- next I got a false-positive from "973348"; I whitelisted it too
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:16:58.419220 2018] [:error] [pid 25067] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:<META[ /+\\\\t].*?charset[ /+\\\\t]*=)" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_xss_attacks.conf"] [line "492"] [id "973348"] [rev "2"] [msg "IE XSS Filters - Attack Detected."] [data "Matched Data: <meta charset= found within ARGS:wpTextbox1: test1 My work log from the year 2018. I intentionally made this verbose to make future admin's work easier when troubleshooting. The more keywords, error messages, etc that are listed in this log, the more helpful it will be for the future OSE Sysadmin. =See Also= # Maltfield_Log # User:Maltfield # Special:Contributions/Maltfield =Sun Apr 08, 2018= # I checked again just after midnight; the retry appears to have worked pretty great. J..."] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNs5z8-aFiK3LAiiWdNlQAAAAU"]
- finally, the edit went through. Here's what the apache conf file looks like now:
[root@hetzner2 conf.d]# cat 00-wiki.opensourceecology.org.conf
...
# disable mod_security with rules as needed
# (found by logs in: /var/log/httpd/modsec_audit.log)
<IfModule security2_module>
    SecRuleRemoveById 960015 960024 960904 960015 960017 970901 950109 981172 981231 981245 973338 973306 950901 981317 959072 981257 981243 958030 973300 973304 973335 973333 973316 200004 973347 981319 981240 973301 973344 960335 960020 950120 959073 981244 981248 981253 973334 973332 981242 981246 960915 200003 981173 981318 981260 950911 973302 973324 973317 981255 958057 958056 973327 950018 950001 958008 973329 950907 950910 950005 950006 959151 958976 950007 959070 950908 981250 981241 981252 981256 981249 981251 973336 958006 958049 958051 973305 973314 973331 973330 973348

    # set the (sans file) POST size limit to 1M (default is 128K)
    SecRequestBodyNoFilesLimit 1000000
</IfModule>
...
[root@hetzner2 conf.d]#
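Since every one of these false positives fired on the same wiki edit-box parameter (ARGS:wpTextbox1), a narrower alternative worth considering (a sketch, not what was deployed) is to keep each rule active but exclude just that parameter from its inspection targets with SecRuleUpdateTargetById (available in ModSecurity 2.6+):

```apache
# Sketch: instead of removing rule 950907 globally, stop it from inspecting
# only the MediaWiki edit textarea; all other request parameters stay protected.
<IfModule security2_module>
    SecRuleUpdateTargetById 950907 "!ARGS:wpTextbox1"
    SecRuleUpdateTargetById 950910 "!ARGS:wpTextbox1"
</IfModule>
```

This trades a bigger config (one line per rule) for keeping the rules enforced on every other form field.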
- I tried to edit Marcin's log, but got another 403 Forbidden; I whitelisted "981276", and then it worked fine
==> wiki.opensourceecology.org/error_log <== [Sun Apr 15 15:21:28.339555 2018] [:error] [pid 25157] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?i:(?:(union(.*?)select(.*?)from)))" at ARGS:wpTextbox1. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_41_sql_injection_attacks.conf"] [line "225"] [id "981276"] [msg "Looking for basic sql injection. Common attack string for mysql, oracle and others."] [data "Matched Data: Unions in St. Joseph]]. How to Find a Good Local Bank. Seed Eco-Home Utilities. How to Glue PVC and ABS.\\x0d\\x0a\\x0d\\x0a=Tue Sep 12, 2017=\\x0d\\x0aOSE HeroX - The Open Source Microfactory Challenge. Tiny Homes.\\x0d\\x0a=Mon Sep 11, 2017=\\x0d\\x0aComparison of CNC Milling to 3D Printing in Metal. Putin Interviews. Track Construction Set. Unauthorized ACH. \\x0d\\x0a\\x0d\\x0a=Sat Sep 9, 2017=\\x0d\\x0a2\\x22 Universal Axis. [[The Monetary System Visually Explain..."] [severity "CRITICAL"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [hostname "wiki.opensourceecology.org"] [uri "/index.php"] [unique_id "WtNt@GQFLmh587VJM0wsSgAAAAM"]
- I'll ask Marcin to try again
- (3) Chrome ERR_BLOCKED_BY_XSS_AUDITOR on Preview
- I was able to reproduce this issue in Chrome only when previewing Marcin's log file. A quick google search suggests it's a bug in Chrome v57 that was fixed in v58. I confirmed that I'm running Chromium v57.0.2987.98.
- I asked Marcin to just use Firefox until the bug in Chromium is fixed
- (4) thumbnail generation error
- so it does appear that the image uploaded https://wiki.opensourceecology.org/images/e/e9/Joehaas.jpg
- but the thumbnail generation is throwing an error
- for some reason this doesn't happen to the image I uploaded https://wiki.opensourceecology.org/wiki/File:NewFile.jpg
Error creating thumbnail: Unable to run external programs, proc_open() is disabled. Error code: 1
- this appears to be because MediaWiki was configured to use ImageMagick for thumbnail generation. I disabled this in LocalSettings.php, and on page refresh the thumbnail appeared instead of the error.
# we disable using image magick because we intentionally don't grant php exec() # or proc_open() permissions $wgUseImageMagick = false; #$wgImageMagickConvertCommand = "/usr/bin/convert";
- I confirmed that the thumbnails all exist as files on the system, so at least these thumbnails don't need to be generated at every page load
[root@hetzner2 wiki.opensourceecology.org]# ls -lah htdocs/images/thumb/e/e9/Joehaas.jpg/ total 804K drwxr-xr-x 2 apache apache 4.0K Apr 15 16:11 . drwxrwx--- 36 apache apache 4.0K Apr 15 16:11 .. -rw-r--r-- 1 apache apache 242K Apr 15 16:11 1200px-Joehaas.jpg -rw-r--r-- 1 apache apache 6.2K Apr 15 16:11 120px-Joehaas.jpg -rw-r--r-- 1 apache apache 383K Apr 15 16:11 1600px-Joehaas.jpg -rw-r--r-- 1 apache apache 29K Apr 15 16:11 320px-Joehaas.jpg -rw-r--r-- 1 apache apache 127K Apr 15 16:11 800px-Joehaas.jpg [root@hetzner2 wiki.opensourceecology.org]#
- I sent an email with these findings back to Marcin. I'm still waiting for the test plan.
Thu Apr 12, 2018
- ok, returning to the wiki. The last item I changed was fixing the caching to use APCu (CACHE_ACCEL) instead of the db, to prevent the cpPosTime cookie from causing varnish to hit-for-pass, which had been rendering our varnish cache useless. Now that that's fixed, I need to test that updating a page's content triggers a call to varnish to purge the cache for the given page.
- well...the wiki site is inaccessible because I moved it out of the docroot to reduce our backup sizes of hetzner2 on dreamhost. The content was super stale anyway, so I'll just follow my guide to do a fresh fork of the site
- I updated the guide for migrating the wiki to use the new tmp dir for the data dumps @ "/usr/home/osemain/noBackup/tmp/" instead of "/usr/home/osemain/tmp/". This prevents the redundant data from being archived in the daily backup.
- The data dump of the wiki took 1 hour to complete on hetzner1.
# DECLARE VARIABLES source /usr/home/osemain/backups/backup.settings stamp=`date +%Y%m%d` backupDir_hetzner1="/usr/home/osemain/noBackup/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}" backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2" backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz" vhostDir_hetzner1='/usr/www/users/osemain/w' dbName_hetzner1='osewiki' dbUser_hetzner1="${mysqlUser_wiki}" dbPass_hetzner1="${mysqlPass_wiki}" # STEP 1: BACKUP DB mkdir -p ${backupDir_hetzner1}/{current,old} pushd ${backupDir_hetzner1}/current/ mv ${backupDir_hetzner1}/current/* ${backupDir_hetzner1}/old/ time nice mysqldump -u"${dbUser_hetzner1}" -p"${dbPass_hetzner1}" --all-databases --single-transaction | bzip2 -c > ${backupDir_hetzner1}/current/${backupFileName_db_hetzner1} # STEP 2: BACKUP FILES time nice tar -czvf ${backupDir_hetzner1}/current/${backupFileName_files_hetzner1} ${vhostDir_hetzner1} ... /usr/www/users/osemain/w/maintenance/testRunner.ora.sql real 60m16.755s user 20m29.404s sys 1m55.104s osemain@dedi978:~/noBackup/tmp/backups_for_migration_to_hetzner2/wiki_20180412/current$
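Before importing on hetzner2 (and certainly before dropping anything), it's worth confirming the transferred dump is an intact bzip2 stream; a minimal sketch against a throwaway file (the real path would be the ${backupFileName_db_hetzner1} file under the backup dir):

```shell
#!/bin/sh
# Hedged sketch: verify a dump is an intact bzip2 stream and non-trivially
# sized before trusting it. A generated throwaway file stands in for the real
# mysqldump output here.
dump=$(mktemp)
printf 'CREATE TABLE wiki_page (page_id INT);\n' | bzip2 -c > "$dump"
# bzip2 -t decompresses to /dev/null, failing on a truncated/corrupt stream
bzip2 -t "$dump" && status=OK || status=CORRUPT
size=$(stat -c '%s' "$dump")
echo "$status $size"
rm -f "$dump"
```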
- declared variables for this dump, note the timestamp is hardcoded here for future reference & reuse. I also double-checked that mediawiki 1.30.0 is still the latest stable version.
# DECLARE VARIABLES source /root/backups/backup.settings #stamp=`date +%Y%m%d` stamp="20180412" backupDir_hetzner1="/usr/home/osemain/noBackup/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}" backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/wiki_${stamp}" backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2" backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz" dbName_hetzner1='osewiki' dbName_hetzner2='osewiki_db' dbUser_hetzner2="osewiki_user" dbPass_hetzner2="CHANGEME" vhostDir_hetzner2="/var/www/html/wiki.opensourceecology.org" docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs" newMediawikiSourceUrl='https://releases.wikimedia.org/mediawiki/1.30/mediawiki-1.30.0.tar.gz'
- fixed an issue with the rsync command in [Mediawiki#migrate_site_from_hetzner1_to_hetzner2]
- discovered that the htdocs/.htaccess file doesn't actually exist; hmm
[root@hetzner2 current]# find /var/www/html/wiki.opensourceecology.org/htdocs/ | grep -i htaccess /var/www/html/wiki.opensourceecology.org/htdocs/images/deleted/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/images/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/archives/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/includes/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/includes/composer/ComposerVendorHtaccessCreator.php /var/www/html/wiki.opensourceecology.org/htdocs/languages/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/serialized/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/extensions/Widgets/compiled_templates/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/tests/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/tests/qunit/.htaccess /var/www/html/wiki.opensourceecology.org/htdocs/cache/.htaccess [root@hetzner2 current]# [root@hetzner2 current]# find mediawiki-1.30.0 | grep -i htaccess mediawiki-1.30.0/images/.htaccess mediawiki-1.30.0/maintenance/archives/.htaccess mediawiki-1.30.0/maintenance/.htaccess mediawiki-1.30.0/includes/.htaccess mediawiki-1.30.0/includes/composer/ComposerVendorHtaccessCreator.php mediawiki-1.30.0/languages/.htaccess mediawiki-1.30.0/serialized/.htaccess mediawiki-1.30.0/tests/.htaccess mediawiki-1.30.0/tests/qunit/.htaccess mediawiki-1.30.0/cache/.htaccess [root@hetzner2 current]#
- attempting to run the maintenance/update.php script failed
[root@hetzner2 current]# pushd ${docrootDir_hetzner2}/maintenance /var/www/html/wiki.opensourceecology.org/htdocs/maintenance /var/tmp/backups_for_migration_from_hetzner1/wiki_20180412/current /var/www/html [root@hetzner2 maintenance]# php update.php PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 715 PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 674 PHP Notice: Undefined index: HTTP_USER_AGENT in /var/www/html/wiki.opensourceecology.org/LocalSettings.php on line 5 PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 715 PHP Notice: Undefined index: SERVER_NAME in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 1507 PHP Notice: Undefined index: SERVER_NAME in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 1507 MediaWiki 1.30.0 Updater Your composer.lock file is up to date with current dependencies! PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693 Set $wgShowExceptionDetails = true; and $wgShowDBErrorBacktrace = true; at the bottom of LocalSettings.php to show detailed debugging information. [root@hetzner2 maintenance]#
- but when I attempt to load the page, I get the following response
<!DOCTYPE html> <html><head><title>Internal error - Open Source Ecology</title><style>body { font-family: sans-serif; margin: 0; padding: 0.5em 2em; }</style></head><body> <div class="errorbox">[Ws@Ikz3LkjSWMQ6sfECfMQAAAAo] 2018-04-12 16:25:55: Fatal exception of type MWException</div> <!-- Set $wgShowExceptionDetails = true; at the bottom of LocalSettings.php to show detailed debugging information. --></body></html>
- oh, duh, I left the dbPass_hetzner2 at "CHANGEME". I re-did the DB commands with this fixed, and then tried again
- but I first wanted to make sure that deleting the db also deleted the associated users, so before dropping it I did a dump of the users & the DBs they have access to
[root@hetzner2 sites-enabled]# mysql -uroot -p${mysqlPass} mysql -sNe "select Host,Db,User from db;" % test % test\\_% 127.0.0.1 cacti_db cacti_user localhost cacti_db cacti_user localhost fef_db fef_user localhost obi2_db obi2_user localhost obi3_db obi3_user localhost obi_db obi_user localhost obi_staging_db obi_staging_user localhost oseforum_db oseforum_user localhost osemain_db osemain_user localhost osemain_s_db osemain_s_user localhost osewiki_db osewiki_user localhost oswh_db oswh_user localhost piwik_obi_db piwik_obi_user localhost seedhome_db seedhome_user [root@hetzner2 sites-enabled]#
- then I dropped the db
[root@hetzner2 current]# time nice mysql -uroot -p${mysqlPass} -sNe "DROP DATABASE IF EXISTS ${dbName_hetzner2};" real 0m0.165s user 0m0.004s sys 0m0.000s [root@hetzner2 current]#
- then I checked again
[root@hetzner2 sites-enabled]# mysql -uroot -p${mysqlPass} mysql -sNe "select Host,Db,User from db;" % test % test\\_% 127.0.0.1 cacti_db cacti_user localhost cacti_db cacti_user localhost fef_db fef_user localhost obi2_db obi2_user localhost obi3_db obi3_user localhost obi_db obi_user localhost obi_staging_db obi_staging_user localhost oseforum_db oseforum_user localhost osemain_db osemain_user localhost osemain_s_db osemain_s_user localhost osewiki_db osewiki_user localhost oswh_db oswh_user localhost piwik_obi_db piwik_obi_user localhost seedhome_db seedhome_user [root@hetzner2 sites-enabled]#
- well that sucks; the user is still there! This concurs with their documentation https://dev.mysql.com/doc/refman/5.7/en/drop-database.html
Important: When a database is dropped, privileges granted specifically for the database are not automatically dropped. They must be dropped manually. See Section 13.7.1.4, “GRANT Syntax”.
- so here's the user:
[root@hetzner2 sites-enabled]# mysql -uroot -p${mysqlPass} mysql -sNe "select Host,Db,User from db where db = 'osewiki_db';" localhost osewiki_db osewiki_user [root@hetzner2 sites-enabled]#
- I had issues dropping the user (the DROP USER below failed because it mistakenly references 'osewiki_db', the db name, instead of 'osewiki_user'), but the REVOKE worked & removed the row from the db table.
[root@hetzner2 sites-enabled]# mysql -uroot -p${mysqlPass} mysql -sNe "REVOKE ALL PRIVILEGES ON osewiki_db.* FROM 'osewiki_user'@'localhost'; DROP USER 'osewiki_db'@'localhost'; FLUSH PRIVILEGES;" ERROR 1396 (HY000) at line 1: Operation DROP USER failed for 'osewiki_db'@'localhost' [root@hetzner2 sites-enabled]# mysql -uroot -p${mysqlPass} mysql -sNe "select Host,Db,User from db where db = 'osewiki_db';" [root@hetzner2 sites-enabled]# [root@hetzner2 sites-enabled]# mysql -uroot -p${mysqlPass} mysql -sNe "select Host,Db,User from db;" % test % test\\_% 127.0.0.1 cacti_db cacti_user localhost cacti_db cacti_user localhost fef_db fef_user localhost obi2_db obi2_user localhost obi3_db obi3_user localhost obi_db obi_user localhost obi_staging_db obi_staging_user localhost oseforum_db oseforum_user localhost osemain_db osemain_user localhost osemain_s_db osemain_s_user localhost oswh_db oswh_user localhost piwik_obi_db piwik_obi_user localhost seedhome_db seedhome_user [root@hetzner2 sites-enabled]#
- ok, I created the db & user again.
[root@hetzner2 current]# time nice mysql -uroot -p${mysqlPass} -sNe "CREATE DATABASE ${dbName_hetzner2}; USE ${dbName_hetzner2};" real 0m0.004s user 0m0.000s sys 0m0.003s [root@hetzner2 current]# time nice mysql -uroot -p${mysqlPass} < "db.sql" real 2m18.618s user 0m9.201s sys 0m0.429s [root@hetzner2 current]# time nice mysql -uroot -p${mysqlPass} -sNe "GRANT SELECT, INSERT, UPDATE, DELETE ON ${dbName_hetzner2}.* TO '${dbUser_hetzner2}'@'localhost' IDENTIFIED BY '${dbPass_hetzner2}'; FLUSH PRIVILEGES;" real 0m0.004s user 0m0.002s sys 0m0.001s [root@hetzner2 current]#
- I ran the maintenance/update.php script again; this time it did something
[root@hetzner2 current]# pushd ${docrootDir_hetzner2}/maintenance /var/www/html/wiki.opensourceecology.org/htdocs/maintenance /var/tmp/backups_for_migration_from_hetzner1/wiki_20180412/current /var/www/html [root@hetzner2 maintenance]# php update.php PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 715 PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 674 PHP Notice: Undefined index: HTTP_USER_AGENT in /var/www/html/wiki.opensourceecology.org/LocalSettings.php on line 5 PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 715 PHP Notice: Undefined index: SERVER_NAME in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 1507 PHP Notice: Undefined index: SERVER_NAME in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 1507 MediaWiki 1.30.0 Updater Your composer.lock file is up to date with current dependencies! PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693 Going to run database updates for osewiki_db-wiki_ Depending on the size of your database this may take a while! Abort with control-c in the next five seconds (skip this countdown with --quick) ... 0 Turning off Content Handler DB fields for this part of upgrade. ...have ipb_id field in ipblocks table. ...have ipb_expiry field in ipblocks table. ...already have interwiki table ...indexes seem up to 20031107 standards. ...have rc_type field in recentchanges table. ...index new_name_timestamp already set on recentchanges table. ...have user_real_name field in user table. ...querycache table already exists. 
...objectcache table already exists. ...categorylinks table already exists. ...have pagelinks; skipping old links table updates ...il_from OK ...have rc_ip field in recentchanges table. ...index PRIMARY already set on image table. ...have rc_id field in recentchanges table. ...have rc_patrolled field in recentchanges table. ...logging table already exists. ...have user_token field in user table. ...have wl_notificationtimestamp field in watchlist table. ...watchlist talk page rows already present. ...user table does not contain user_emailauthenticationtimestamp field. ...page table already exists. ...have log_params field in logging table. ...logging table has correct log_title encoding. ...have ar_rev_id field in archive table. ...have page_len field in page table. ...revision table does not contain inverse_timestamp field. ...have rev_text_id field in revision table. ...have rev_deleted field in revision table. ...have img_width field in image table. ...have img_metadata field in image table. ...have user_email_token field in user table. ...have ar_text_id field in archive table. ...page_namespace is already a full int (int(11)). ...ar_namespace is already a full int (int(11)). ...rc_namespace is already a full int (int(11)). ...wl_namespace is already a full int (int(11)). ...qc_namespace is already a full int (int(11)). ...log_namespace is already a full int (int(11)). ...have img_media_type field in image table. ...already have pagelinks table. ...image table does not contain img_type field. ...already have unique user_name index. ...user_groups table exists and is in current format. ...have ss_total_pages field in site_stats table. ...user_newtalk table already exists. ...transcache table already exists. ...have iw_trans field in interwiki table. ...wl_notificationtimestamp is already nullable. ...index times already set on logging table. ...have ipb_range_start field in ipblocks table. 
...no page_random rows needed to be set ...have user_registration field in user table. ...templatelinks table already exists ...externallinks table already exists. ...job table already exists. ...have ss_images field in site_stats table. ...langlinks table already exists. ...querycache_info table already exists. ...filearchive table already exists. ...have ipb_anon_only field in ipblocks table. ...index rc_ns_usertext already set on recentchanges table. ...index rc_user_text already set on recentchanges table. ...have user_newpass_time field in user table. ...redirect table already exists. ...querycachetwo table already exists. ...have ipb_enable_autoblock field in ipblocks table. ...index pl_namespace on table pagelinks includes field pl_from. ...index tl_namespace on table templatelinks includes field tl_from. ...index il_to on table imagelinks includes field il_from. ...have rc_old_len field in recentchanges table. ...have user_editcount field in user table. ...page_restrictions table already exists. ...have log_id field in logging table. ...have rev_parent_id field in revision table. ...have pr_id field in page_restrictions table. ...have rev_len field in revision table. ...have rc_deleted field in recentchanges table. ...have log_deleted field in logging table. ...have ar_deleted field in archive table. ...have ipb_deleted field in ipblocks table. ...have fa_deleted field in filearchive table. ...have ar_len field in archive table. ...have ipb_block_email field in ipblocks table. ...index cl_sortkey on table categorylinks includes field cl_from. ...have oi_metadata field in oldimage table. ...index usertext_timestamp already set on archive table. ...index img_usertext_timestamp already set on image table. ...index oi_usertext_timestamp already set on oldimage table. ...have ar_page_id field in archive table. ...have img_sha1 field in image table. ...protected_titles table already exists. ...have ipb_by_text field in ipblocks table. 
...page_props table already exists. ...updatelog table already exists. ...category table already exists. ...category table already populated. ...have ar_parent_id field in archive table. ...have user_last_timestamp field in user_newtalk table. ...protected_titles table has correct pt_title encoding. ...have ss_active_users field in site_stats table. ...ss_active_users user count set... ...have ipb_allow_usertalk field in ipblocks table. ...change_tag table already exists. ...tag_summary table already exists. ...valid_tag table already exists. ...user_properties table already exists. ...log_search table already exists. ...have log_user_text field in logging table. ...l10n_cache table already exists. ...index change_tag_rc_tag already set on change_tag table. ...have rd_interwiki field in redirect table. ...transcache tc_time already converted. ...*_mime_minor fields are already long enough. ...iwlinks table already exists. ...index iwl_prefix_title_from already set on iwlinks table. ...have ul_value field in updatelog table. ...have iw_api field in interwiki table. ...iwl_prefix key doesn't exist. ...have cl_collation field in categorylinks table. ...categorylinks up-to-date. ...module_deps table already exists. ...ar_page_revid key doesn't exist. ...index ar_revid already set on archive table. ...ll_lang is up-to-date. ...user_last_timestamp is already nullable. ...index user_email already set on user table. ...up_property in table user_properties already modified by patch patch-up_property.sql. ...uploadstash table already exists. ...user_former_groups table already exists. ...index type_action already set on logging table. ...have rev_sha1 field in revision table. ...batch conversion of user_options: nothing to migrate. done. ...user table does not contain user_options field. ...have ar_sha1 field in archive table. ...index page_redirect_namespace_len already set on page table. ...have us_chunk_inx field in uploadstash table. 
...have job_timestamp field in job table. ...index page_user_timestamp already set on revision table. ...have ipb_parent_block_id field in ipblocks table. ...index ipb_parent_block_id already set on ipblocks table. ...category table does not contain cat_hidden field. ...have rev_content_format field in revision table. ...have rev_content_model field in revision table. ...have ar_content_format field in archive table. ...have ar_content_model field in archive table. ...have page_content_model field in page table. Content Handler DB fields should be usable now. ...site_stats table does not contain ss_admins field. ...recentchanges table does not contain rc_moved_to_title field. ...sites table already exists. ...have fa_sha1 field in filearchive table. ...have job_token field in job table. ...have job_attempts field in job table. ...have us_props field in uploadstash table. ...ug_group in table user_groups already modified by patch patch-ug_group-length-increase-255.sql. ...ufg_group in table user_former_groups already modified by patch patch-ufg_group-length-increase-255.sql. ...index pp_propname_page already set on page_props table. ...index img_media_mime already set on image table. ...iwl_prefix_title_from index is already non-UNIQUE. ...index iwl_prefix_from_title already set on iwlinks table. ...have ar_id field in archive table. ...have el_id field in externallinks table. ...have rc_source field in recentchanges table. ...index log_user_text_type_time already set on logging table. ...index log_user_text_time already set on logging table. ...have page_links_updated field in page table. ...have user_password_expires field in user table. ...have pp_sortkey field in page_props table. ...recentchanges table does not contain rc_cur_time field. ...index wl_user_notificationtimestamp already set on watchlist table. ...have page_lang field in page table. ...have pl_from_namespace field in pagelinks table. ...have tl_from_namespace field in templatelinks table. 
...have il_from_namespace field in imagelinks table. ...img_major_mime in table image already modified by patch patch-img_major_mime-chemical.sql. ...oi_major_mime in table oldimage already modified by patch patch-oi_major_mime-chemical.sql. ...fa_major_mime in table filearchive already modified by patch patch-fa_major_mime-chemical.sql. Extending edit summary lengths (and setting defaults) ...Set $wgShowExceptionDetails = true; and $wgShowDBErrorBacktrace = true; at the bottom of LocalSettings.php to show detailed debugging information. [root@hetzner2 maintenance]#
- but I'm still getting an error when trying to load it
user@personal:~$ curl -i "https://wiki.opensourceecology.org/" HTTP/1.1 500 Internal Server Error Server: nginx Date: Thu, 12 Apr 2018 17:05:38 GMT Content-Type: text/html; charset=utf-8 Content-Length: 421 Connection: keep-alive X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block X-Varnish: 98392 32786 Age: 86 Via: 1.1 varnish-v4 <!DOCTYPE html> <html><head><title>Internal error - Open Source Ecology</title><style>body { font-family: sans-serif; margin: 0; padding: 0.5em 2em; }</style></head><body> <div class="errorbox">[Ws@RjNRic8mf4rYA2bqP2AAAAAg] 2018-04-12 17:04:12: Fatal exception of type MWException</div> <!-- Set $wgShowExceptionDetails = true; at the bottom of LocalSettings.php to show detailed debugging information. --></body></html> user@personal:~$
- unfortunately 'wiki-error.log' is not showing any content. So I took the error's advice & added $wgShowExceptionDetails, even though this will leak the error to the user :(
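For reference, both debug flags that the error output names go at the bottom of LocalSettings.php; they must be reverted after diagnosis since they expose internals to visitors:

```php
# temporary debugging only -- these leak exception details & DB backtraces to
# anyone loading the page, so revert them once the root cause is found
$wgShowExceptionDetails = true;
$wgShowDBErrorBacktrace = true;
```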
- silly, there was still no content sent to 'wiki-error.log', but the curl gave me better info
user@personal:~$ curl -i "https://wiki.opensourceecology.org/" HTTP/1.1 500 Internal Server Error Server: nginx Date: Thu, 12 Apr 2018 17:10:01 GMT Content-Type: text/html; charset=utf-8 Content-Length: 3184 Connection: keep-alive X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block X-Varnish: 196610 65728 Age: 55 Via: 1.1 varnish-v4 <!DOCTYPE html> <html><head><title>Internal error - Open Source Ecology</title><style>body { font-family: sans-serif; margin: 0; padding: 0.5em 2em; }</style></head><body> <p>[Ws@SsmBHAg3J1XRaFStUtgAAAAQ] / MWException from line 108 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/localisation/LCStoreCDB.php: Unable to open CDB file "/var/www/html/wiki.opensourceecology.org/htdocs/../cache/l10n_cache-en.cdb.tmp.956119238" for write.</p><p>Backtrace:</p><p>#0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/localisation/LocalisationCache.php(1013): LCStoreCDB->startWrite(string)<br /> #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/localisation/LocalisationCache.php(459): LocalisationCache->recache(string)<br /> #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/localisation/LocalisationCache.php(376): LocalisationCache->initLanguage(string)<br /> #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/localisation/LocalisationCache.php(291): LocalisationCache->loadSubitem(string, string, string)<br /> #4 /var/www/html/wiki.opensourceecology.org/htdocs/languages/Language.php(2587): LocalisationCache->getSubitem(string, string, string)<br /> #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(933): Language->getMessage(string)<br /> #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(888): MessageCache->getMessageForLang(LanguageEn, string, boolean, array)<br /> #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(829): 
MessageCache->getMessageFromFallbackChain(LanguageEn, string, boolean)<br /> #8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(1275): MessageCache->get(string, boolean, LanguageEn)<br /> #9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(842): Message->fetchMessage()<br /> #10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(934): Message->toString(string)<br /> #11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/title/MalformedTitleException.php(49): Message->text()<br /> #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/title/MediaWikiTitleCodec.php(311): MalformedTitleException->__construct(string, string)<br /> #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Title.php(3526): MediaWikiTitleCodec->splitTitleString(string, integer)<br /> #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Title.php(361): Title->secureAndSplit()<br /> #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(84): Title::newFromURL(NULL)<br /> #16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(140): MediaWiki->parseTitle()<br /> #17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(767): MediaWiki->getTitle()<br /> #18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main()<br /> #19 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run()<br /> #20 {main}</p> </body></html> user@personal:~$
- ok, so this is an issue with the 'cache' dir outside the docroot, which is needed for storing interface messages to files per Aaron Schulz's guide. I had actually already created this, but the permissions were wrong! I updated the documentation in the migration guide to include creating this dir & setting its permissions correctly
[root@hetzner2 wiki.opensourceecology.org]# ls -lah /var/www/html/wiki.opensourceecology.org/cache total 1.1M d---r-x--- 2 not-apache apache 4.0K Mar 16 23:55 . d---r-x--- 4 not-apache apache 4.0K Apr 12 17:08 .. ----r----- 1 not-apache apache 1.1M Mar 16 23:55 l10n_cache-en.cdb [root@hetzner2 wiki.opensourceecology.org]# [ -d "${vhostDir_hetzner2}/cache" ] || mkdir "${vhostDir_hetzner2}/cache" [root@hetzner2 wiki.opensourceecology.org]# chown -R apache:apache "${vhostDir_hetzner2}/cache" [root@hetzner2 wiki.opensourceecology.org]# find "${vhostDir_hetzner2}/cache" -type f -exec chmod 0660 {} \; [root@hetzner2 wiki.opensourceecology.org]# find "${vhostDir_hetzner2}/cache" -type d -exec chmod 0770 {} \; [root@hetzner2 wiki.opensourceecology.org]# [root@hetzner2 wiki.opensourceecology.org]# [root@hetzner2 wiki.opensourceecology.org]# [root@hetzner2 wiki.opensourceecology.org]# ls -lah /var/www/html/wiki.opensourceecology.org/cache total 1.1M drwxrwx--- 2 apache apache 4.0K Mar 16 23:55 . d---r-x--- 4 not-apache apache 4.0K Apr 12 17:08 .. -rw-rw---- 1 apache apache 1.1M Mar 16 23:55 l10n_cache-en.cdb [root@hetzner2 wiki.opensourceecology.org]#
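The same ownership/mode scheme, exercised on a throwaway directory (the chown to apache:apache is omitted since it needs root & the real user; only the modes are demonstrated):

```shell
#!/bin/sh
# Sketch: reproduce the permission scheme for MediaWiki's out-of-docroot cache
# dir -- 0770 on dirs, 0660 on files -- then read the modes back to verify.
cachedir=$(mktemp -d)
touch "$cachedir/l10n_cache-en.cdb"
find "$cachedir" -type f -exec chmod 0660 {} \;
find "$cachedir" -type d -exec chmod 0770 {} \;
dmode=$(stat -c '%a' "$cachedir")
fmode=$(stat -c '%a' "$cachedir/l10n_cache-en.cdb")
echo "$dmode $fmode"
rm -rf "$cachedir"
```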
- I manually purged the varnish cache, reloaded, and it worked!
[root@hetzner2 wiki.opensourceecology.org]# varnishadm 'ban req.url ~ "."' [root@hetzner2 wiki.opensourceecology.org]# user@personal:~$ curl -i "https://wiki.opensourceecology.org/" HTTP/1.1 301 Moved Permanently Server: nginx Date: Thu, 12 Apr 2018 17:28:07 GMT Content-Type: text/html; charset=utf-8 Content-Length: 0 Connection: keep-alive X-Content-Type-Options: nosniff Vary: Accept-Encoding,Cookie Cache-Control: s-maxage=1200, must-revalidate, max-age=0 Last-Modified: Thu, 12 Apr 2018 17:28:07 GMT Location: https://wiki.opensourceecology.org/wiki/Main_Page X-XSS-Protection: 1; mode=block X-Varnish: 625 Age: 0 Via: 1.1 varnish-v4 Strict-Transport-Security: max-age=15552001 Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report" user@personal:~$ curl -i "https://wiki.opensourceecology.org/wiki/Main_Page" ... </body> </html> user@personal:~$
- ...but when I went to log in, I got an error:
[Ws@XjG9Z0eot@07Oyosq6gAAAAc] 2018-04-12 17:29:48: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"
- I apparently encountered this in the past, but the issue was that I needed to fix an ini_set that I mangled with a sed & re-run the maintenance scripts [Maltfield_log_2018#Tue_Feb_27.2C_2018]
- I ran the maintenance scripts another two times, cleared the varnish cache, and tried to log in again; I got the same error
[Ws@Zy9Ric8mf4rYA2bqQDgAAAAg] 2018-04-12 17:39:23: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"
- I noticed that the 'wiki-error.log' file was populating with output from the update.php run, and it looks like the issue is that the db user doesn't have the ALTER privilege on the db
[error] [75060eb56a79742c1c46e7a5] [no req] ErrorException from line 1507 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php: PHP Notice: Undefined index: SERVER_NAME #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php(1507): MWExceptionHandler::handleError(integer, string, string, integer, array) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/db/MWLBFactory.php(60): wfHostname() #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/ServiceWiring.php(54): MWLBFactory::applyDefaultConfig(array, GlobalVarConfig, ConfiguredReadOnlyMode) #3 [internal function]: MediaWiki\Services\ServiceContainer->{closure}(MediaWiki\MediaWikiServices) #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/services/ServiceContainer.php(361): call_user_func_array(Closure, array) #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/services/ServiceContainer.php(344): MediaWiki\Services\ServiceContainer->createService(string) #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWikiServices.php(503): MediaWiki\Services\ServiceContainer->getService(string) #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Setup.php(664): MediaWiki\MediaWikiServices->getDBLoadBalancerFactory() #8 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/doMaintenance.php(79): require_once(string) #9 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/update.php(249): require_once(string) #10 {main} IP: 127.0.0.1 Start command line script update.php [caches] cluster: APCBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: APCBagOStuff, session: APCBagOStuff [caches] LocalisationCache: using store LCStoreNull [error] [75060eb56a79742c1c46e7a5] [no req] ErrorException from line 1507 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php: PHP Notice: Undefined index: SERVER_NAME #0 
/var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php(1507): MWExceptionHandler::handleError(integer, string, string, integer, array) #1 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php(565): wfHostname() #2 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/doMaintenance.php(89): Maintenance->setAgentAndTriggers() #3 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/update.php(249): require_once(string) #4 {main} [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": false, "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [error] [75060eb56a79742c1c46e7a5] [no req] ErrorException from line 693 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: PHP Warning: ini_set() has been disabled for security reasons #0 [internal function]: MWExceptionHandler::handleError(integer, string, string, integer, array) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(693): ini_set(string, string) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(129): Wikimedia\Rdbms\Database->installErrorHandler() #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(285): Wikimedia\Rdbms\DatabaseMysqlBase->open(string, string, string, string) #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(102): Wikimedia\Rdbms\Database->__construct(array) #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(415): Wikimedia\Rdbms\DatabaseMysqlBase->__construct(array) #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/loadbalancer/LoadBalancer.php(985): Wikimedia\Rdbms\Database::factory(string, array) #7 
/var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/loadbalancer/LoadBalancer.php(801): Wikimedia\Rdbms\LoadBalancer->reallyOpenConnection(array, boolean) #8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/loadbalancer/LoadBalancer.php(667): Wikimedia\Rdbms\LoadBalancer->openConnection(integer, boolean, integer) #9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php(2858): Wikimedia\Rdbms\LoadBalancer->getConnection(integer, array, boolean) #10 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php(1253): wfGetDB(integer, array, boolean) #11 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/update.php(146): Maintenance->getDB(integer) #12 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/doMaintenance.php(92): UpdateMediaWiki->execute() #13 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/update.php(249): require_once(string) #14 {main} [DBConnection] Connected to database 0 at 'localhost'. [DBQuery] SQL ERROR: ALTER command denied to user 'osewiki_user'@'localhost' for table 'wiki_revision' (localhost) [exception] [75060eb56a79742c1c46e7a5] [no req] Wikimedia\Rdbms\DBQueryError from line 1149 of /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php: A database query error has occurred. Did you forget to run your application's database schema updater after upgrading? 
Query: ALTER TABLE `wiki_revision` MODIFY rev_comment varbinary(767) NOT NULL default '' Function: Wikimedia\Rdbms\Database::sourceFile( /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/archives/patch-editsummary-length.sql ) Error: 1142 ALTER command denied to user 'osewiki_user'@'localhost' for table 'wiki_revision' (localhost) #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(979): Wikimedia\Rdbms\Database->reportQueryError(string, integer, string, string, boolean) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(3325): Wikimedia\Rdbms\Database->query(string, string) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(3274): Wikimedia\Rdbms\Database->sourceStream(unknown type, NULL, NULL, string, NULL) #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/installer/DatabaseUpdater.php(673): Wikimedia\Rdbms\Database->sourceFile(string) #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/installer/MysqlUpdater.php(1194): DatabaseUpdater->applyPatch(string, boolean, string) #5 [internal function]: MysqlUpdater->doExtendCommentLengths() #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/installer/DatabaseUpdater.php(472): call_user_func_array(array, array) #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/installer/DatabaseUpdater.php(436): DatabaseUpdater->runUpdates(array, boolean) #8 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/update.php(204): DatabaseUpdater->doUpdates(array) #9 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/doMaintenance.php(92): UpdateMediaWiki->execute() #10 /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/update.php(249): require_once(string) #11 {main} [DBConnection] Closing connection to database 'localhost'.
- this issue was somewhat anticipated, as it's described in the MediaWiki Security docs https://www.mediawiki.org/wiki/Manual:Security#General_MySQL_and_MariaDB_recommendations
- I created a new db "superuser" & granted it all permissions. Note that I had to use "osewiki_superusr" instead of "osewiki_superuser" due to MySQL's 16-character limit on user names *sigh*
[root@hetzner2 maintenance]# dbSuperUser_hetzner2="osewiki_superuser" [root@hetzner2 maintenance]# dbSuperPass_hetzner2="CHANGEME" [root@hetzner2 maintenance]# time nice mysql -uroot -p${mysqlPass} -sNe "GRANT ALL ON ${dbName_hetzner2}.* TO '${dbSuperUser_hetzner2}'@'localhost' IDENTIFIED BY '${dbSuperPass_hetzner2}'; FLUSH PRIVILEGES;" ERROR 1470 (HY000) at line 1: String 'osewiki_superuser' is too long for user name (should be no longer than 16) real 0m0.004s user 0m0.002s sys 0m0.002s [root@hetzner2 maintenance]# [root@hetzner2 maintenance]# dbSuperUser_hetzner2="osewiki_superusr" [root@hetzner2 maintenance]# time nice mysql -uroot -p${mysqlPass} -sNe "GRANT ALL ON ${dbName_hetzner2}.* TO '${dbSuperUser_hetzner2}'@'localhost' IDENTIFIED BY '${dbSuperPass_hetzner2}'; FLUSH PRIVILEGES;" real 0m0.004s user 0m0.000s sys 0m0.003s [root@hetzner2 maintenance]#
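The failure above is MySQL's 16-character cap on user names (ERROR 1470). A quick length check before running the GRANT would have caught it; a minimal sketch:

```shell
# Check candidate MySQL user names against the 16-character limit
# that ERROR 1470 reported above
for u in osewiki_superuser osewiki_superusr; do
  if [ "${#u}" -gt 16 ]; then
    echo "$u (${#u} chars): too long"
  else
    echo "$u (${#u} chars): ok"
  fi
done
# osewiki_superuser (17 chars): too long
# osewiki_superusr (16 chars): ok
```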
- I ran the maintenance script again, passing the new superuser's credentials via arguments. It worked.
[root@hetzner2 maintenance]# php update.php --dbuser "${dbSuperUser_hetzner2}" --dbpass "${dbSuperPass_hetzner2}" ... Attempted to insert 685 IP revisions, 685 actually done. Set the local repo temp zone container to be private. Purging caches...done. Done in 26 s. [root@hetzner2 maintenance]#
- I refreshed the login attempt and logged in successfully. I also updated the documentation to include these arguments in the call to execute update.php.
- made an edit to a page, and it appeared to work fine. Then I went to a distinct ephemeral browser, loaded the page, and the edit didn't show.
- I put together a command to watch for varnish purges; I tested it by triggering a purge from the wordpress page on osemain
[root@hetzner2 ~]# varnishlog | grep -EC20 "ReqMethod\s*PURGE" - Timestamp Process: 1523562967.680578 0.341190 0.000040 - Debug "RES_MODE 4" - RespHeader Connection: close - Timestamp Resp: 1523562967.691852 0.352464 0.011274 - Debug "XXX REF 2" - ReqAcct 339 0 339 393 91587 91980 - End * << Session >> 99573 - Begin sess 0 HTTP/1 - SessOpen 127.0.0.1 57242 127.0.0.1:6081 127.0.0.1 6081 1523562967.339361 12 - Link req 99574 rxreq - SessClose RESP_CLOSE 0.353 - End * << Request >> 329175 - Begin req 329174 rxreq - Timestamp Start: 1523562967.712793 0.000000 0.000000 - Timestamp Req: 1523562967.712793 0.000000 0.000000 - ReqStart 127.0.0.1 57244 - ReqMethod PURGE - ReqURL /.* - ReqProtocol HTTP/1.1 - ReqHeader User-Agent: WordPress/4.9.4; https://www.opensourceecology.org - ReqHeader Accept-Encoding: deflate;q=1.0, compress;q=0.5, gzip;q=0.5 - ReqHeader host: www.opensourceecology.org - ReqHeader X-VC-Purge-Method: regex - ReqHeader X-VC-Purge-Host: www.opensourceecology.org - ReqHeader Connection: Close - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1, 127.0.0.1 - ReqHeader X-VC-My-Purge-Key: JOaSAn72IJzrykJp1pfEWaECvUU8KvxZJnxSue3repId3qV8wJOHexjtuhi9r6Wv4FH9y9eFfiMjXX6hvxRrVOEWr2IaBVZMZ7ToEz8nLFdRyjyMkUGMANd6MHOzxiTJ - ReqHeader X-VC-Purge-Key-Auth: false - VCL_acl MATCH purge "localhost" - Debug "VCL_error(200, Purged /.* www.opensourceecology.org)" - VCL_return synth - ReqUnset Accept-Encoding: deflate;q=1.0, compress;q=0.5, gzip;q=0.5 - ReqHeader Accept-Encoding: gzip - VCL_call HASH
- I just noticed the "X-VC-My-Purge-Key" in my logs, which matches /etc/varnish/secret. Well, I already logged it on the public internet, so it's not a secret anymore. Our purges *should* work by ACL limited to IP address, so I went ahead and changed the contents of /etc/varnish/secret to a new, random 128-character string & restarted varnish
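The generation command isn't in my shell history above, but it could have been something like this sketch (the /etc/varnish/secret path is real; the exact tr invocation is illustrative):

```shell
# Sketch: build a fresh 128-character alphanumeric secret.
# Writing to a temp path here; the real target is /etc/varnish/secret.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 128 > /tmp/varnish-secret.example
wc -c < /tmp/varnish-secret.example   # 128
# install -m 600 /tmp/varnish-secret.example /etc/varnish/secret && systemctl restart varnish
```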
- I went to update this purge key in wordpress, but I didn't see it set anywhere
- I made a minor change to the workshops page & saved it. My browser showed the change, but a distinct/ephemeral browser refresh did not show it. I triggered a purge of the page from the wp wui, and a log line popped up in my grep. I refreshed in the distinct/ephemeral browser, and the change was now visible. That confirms that purging still works despite the change to the purge key. Note that the output below still shows the old purge key. *shrug*
* << Request >> 458860 - Begin req 458859 rxreq - Timestamp Start: 1523563588.895393 0.000000 0.000000 - Timestamp Req: 1523563588.895393 0.000000 0.000000 - ReqStart 127.0.0.1 59502 - ReqMethod PURGE - ReqURL / - ReqProtocol HTTP/1.1 - ReqHeader User-Agent: WordPress/4.9.4; https://www.opensourceecology.org - ReqHeader Accept-Encoding: deflate;q=1.0, compress;q=0.5, gzip;q=0.5 - ReqHeader host: www.opensourceecology.org - ReqHeader X-VC-Purge-Method: default - ReqHeader X-VC-Purge-Host: www.opensourceecology.org - ReqHeader Connection: Close - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1, 127.0.0.1 - ReqHeader X-VC-My-Purge-Key: JOaSAn72IJzrykJp1pfEWaECvUU8KvxZJnxSue3repId3qV8wJOHexjtuhi9r6Wv4FH9y9eFfiMjXX6hvxRrVOEWr2IaBVZMZ7ToEz8nLFdRyjyMkUGMANd6MHOzxiTJ - ReqHeader X-VC-Purge-Key-Auth: false - VCL_acl MATCH purge "localhost" - ReqURL / - Debug "VCL_error(200, Purged / www.opensourceecology.org)" - VCL_return synth - ReqUnset Accept-Encoding: deflate;q=1.0, compress;q=0.5, gzip;q=0.5 - ReqHeader Accept-Encoding: gzip
- I checked my varnish config for the wiki site, starting at the vcl_recv() function. The purge ACL does get defined in conf/acl.vcl, which is included by the main vcl file = default.vcl
[root@hetzner2 varnish]# cat default.vcl ################################################################################ # File: default.vcl # Version: 0.1 # Purpose: Main config file for varnish cache. Note that it's intentionally # mostly bare to allow robust vhost-specific logic. Please see this # for more info: # * https://www.getpagespeed.com/server-setup/varnish/varnish-virtual-hosts # Author: Michael Altfield <michael@opensourceecology.org> # Created: 2017-11-12 # Updated: 2017-11-12 ################################################################################ vcl 4.0; ############ # INCLUDES # ############ # import std; include "conf/acl.vcl"; include "lib/purge.vcl"; include "all-vhosts.vcl"; include "catch-all.vcl"; [root@hetzner2 varnish]# cat conf/acl.vcl acl purge { "localhost"; "127.0.0.1"; } [root@hetzner2 varnish]#
- I tested an edit again, but no output came from my grep of the varnishlog for purge requests. hmm.
- I dug down to a tcpdump; here's a positive coming from the purge in the wordpress wui from osemain
[root@hetzner2 varnish]# tcpdump -i lo -nX dst port 6081 ... 22:55:00.501715 IP 127.0.0.1.53826 > 127.0.0.1.6081: Flags [P.], seq 0:1252, ack 1, win 342, options [nop,nop,TS val 3519411272 ecr 3519411272], length 1252 0x0000: 4500 0518 ddd4 4000 4006 5a09 7f00 0001 E.....@.@.Z..... 0x0010: 7f00 0001 d242 17c1 9300 299b 26a2 4143 .....B....).&.AC 0x0020: 8018 0156 030d 0000 0101 080a d1c5 f448 ...V...........H 0x0030: d1c5 f448 4745 5420 2f77 702d 6164 6d69 ...HGET./wp-admi 0x0040: 6e2f 3f70 7572 6765 5f76 6172 6e69 7368 n/?purge_varnish 0x0050: 5f63 6163 6865 3d31 265f 7770 6e6f 6e63 _cache=1&_wpnonc 0x0060: 653d 6661 3862 6565 6264 6566 2048 5454 e=fa8beebdef.HTT 0x0070: 502f 312e 300d 0a58 2d52 6561 6c2d 4950 P/1.0..X-Real-IP 0x0080: 3a20 3736 2e39 372e 3232 332e 3138 350d :.76.97.223.185. 0x0090: 0a58 2d46 6f72 7761 7264 6564 2d46 6f72 .X-Forwarded-For 0x00a0: 3a20 3736 2e39 372e 3232 332e 3138 350d :.76.97.223.185. 0x00b0: 0a58 2d46 6f72 7761 7264 6564 2d50 726f .X-Forwarded-Pro 0x00c0: 746f 3a20 6874 7470 730d 0a58 2d46 6f72 to:.https..X-For 0x00d0: 7761 7264 6564 2d50 6f72 743a 2034 3433 warded-Port:.443 0x00e0: 0d0a 486f 7374 3a20 7777 772e 6f70 656e ..Host:.www.open 0x00f0: 736f 7572 6365 6563 6f6c 6f67 792e 6f72 sourceecology.or 0x0100: 670d 0a43 6f6e 6e65 6374 696f 6e3a 2063 g..Connection:.c 0x0110: 6c6f 7365 0d0a 5573 6572 2d41 6765 6e74 lose..User-Agent ...
- I did a page update in mediawiki while running this tcpdump; I saw the page update come through varnish, but there was no purge.
[root@hetzner2 varnish]# tcpdump -i lo -nX dst port 6081 ... 23:05:17.973341 IP 127.0.0.1.54964 > 127.0.0.1.6081: Flags [P.], seq 0:4047, ack 1, win 342, options [nop,nop,TS val 3520028743 ecr 3520028743], length 4047 0x0000: 4500 1003 96c7 4000 4006 962b 7f00 0001 E.....@.@..+.... 0x0010: 7f00 0001 d6b4 17c1 fba0 f683 3a84 17b3 ............:... 0x0020: 8018 0156 0df8 0000 0101 080a d1cf 6047 ...V..........`G 0x0030: d1cf 6047 504f 5354 202f 696e 6465 782e ..`GPOST./index. 0x0040: 7068 703f 7469 746c 653d 5573 6572 3a4d php?title=User:M 0x0050: 616c 7466 6965 6c64 2661 6374 696f 6e3d altfield&action= 0x0060: 7375 626d 6974 2048 5454 502f 312e 300d submit.HTTP/1.0. 0x0070: 0a58 2d52 6561 6c2d 4950 3a20 3736 2e39 .X-Real-IP:.76.9 0x0080: 372e 3232 332e 3138 350d 0a58 2d46 6f72 7.223.185..X-For 0x0090: 7761 7264 6564 2d46 6f72 3a20 3736 2e39 warded-For:.76.9 0x00a0: 372e 3232 332e 3138 350d 0a58 2d46 6f72 7.223.185..X-For 0x00b0: 7761 7264 6564 2d50 726f 746f 3a20 6874 warded-Proto:.ht 0x00c0: 7470 730d 0a58 2d46 6f72 7761 7264 6564 tps..X-Forwarded 0x00d0: 2d50 6f72 743a 2034 3433 0d0a 486f 7374 -Port:.443..Host 0x00e0: 3a20 7769 6b69 2e6f 7065 6e73 6f75 7263 :.wiki.opensourc 0x00f0: 6565 636f 6c6f 6779 2e6f 7267 0d0a 436f eecology.org..Co ...
- I learned that MediaWiki defaults to sending purge requests over port 80. I changed that to the varnish port that we're using = 6081 by setting these lines in LocalSettings.php
$wgUseSquid = true;
$wgSquidServers = array( '127.0.0.1:6081' );
$wgUsePrivateIPs = true;
- then I did a page update in MediaWiki, and confirmed the PURGE came in via `varnishlog`
[root@hetzner2 varnish]# tcpdump -i lo -nX dst port 6081 ... * << Request >> 331532 - Begin req 331530 rxreq - Timestamp Start: 1523574861.741242 0.000000 0.000000 - Timestamp Req: 1523574861.741242 0.000000 0.000000 - ReqStart 127.0.0.1 55936 - ReqMethod PURGE - ReqURL /index.php?title=User:Maltfield&action=history - ReqProtocol HTTP/1.1 - ReqHeader Host: wiki.opensourceecology.org - ReqHeader Connection: Keep-Alive - ReqHeader Proxy-Connection: Keep-Alive - ReqHeader User-Agent: MediaWiki/1.30.0 SquidPurgeClient - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_acl MATCH purge "localhost" - VCL_return purge - VCL_call HASH - VCL_return lookup - VCL_call PURGE - Debug "VCL_error(200, Purged)" - VCL_return synth - Timestamp Process: 1523574861.741277 0.000035 0.000035 - RespHeader Date: Thu, 12 Apr 2018 23:14:21 GMT - RespHeader Server: Varnish
- I loaded a wiki page in an ephemeral browser, updated it in my other, logged-in browser, then reloaded it in the distinct/ephemeral browser and confirmed that the change came through. So purging is working!
- I launched an ephemeral browser in a fresh disposable vm, loaded the page, got a miss, loaded it again, got a hit. So caching is working for the Main_Page, at least
- unfortunately, the page load included several GET requests that were not HITs
- /load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector
- /load.php?debug=false&lang=en&modules=mediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.sectionAnchor%7Cmediawiki.skinning.interface%7Cskins.vector.styles&only=styles&skin=vector
- /load.php?debug=false&lang=en&modules=startup&only=scripts&skin=vector
- /load.php?debug=false&lang=en&modules=jquery%2Cmediawiki&only=scripts&skin=vector&version=1ubqa9r
- /load.php?debug=false&lang=en&modules=jquery.accessKeyLabel%2CcheckboxShiftClick%2Cclient%2Ccookie%2CgetAttrs%2ChighlightText%2Cmw-jump%2Csuggestions%2CtabIndex%2Cthrottle-debounce%7Cmediawiki.RegExp%2Capi%2Ccookie%2Cnotify%2CsearchSuggest%2Cstorage%2Cto
- //load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector
- I did another refresh and caught the MISSES
- /load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector
- /load.php?debug=false&lang=en&modules=mediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.sectionAnchor%7Cmediawiki.skinning.interface%7Cskins.vector.styles&only=styles&skin=vector
- /load.php?debug=false&lang=en&modules=startup&only=scripts&skin=vector
- /load.php?debug=false&lang=en&modules=jquery.accessKeyLabel%2CcheckboxShiftClick%2Cclient%2Ccookie%2CgetAttrs%2ChighlightText%2Cmw-jump%2Csuggestions%2CtabIndex%2Cthrottle-debounce%7Cmediawiki.RegExp%2Capi%2Ccookie%2Cnotify%2CsearchSuggest%2Cstorage%2Cto
- so many of those are the same; let's isolate to the first one = "/load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector"
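To watch that one URL from the client side, the cache-relevant response headers can be filtered out of a headers-only request (the grep pattern is my own; the URL is copied from above):

```shell
# Fetch only the response headers for the isolated load.php URL and keep
# the fields that indicate hit vs miss (Age, X-Varnish, Cache-Control)
url='https://wiki.opensourceecology.org/load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector'
curl -sI "$url" | grep -Ei '^(age|x-varnish|cache-control):'
```

A nonzero Age plus two IDs in the X-Varnish header indicates the response was served from cache.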
- umm, but a call from curl yielded a HIT
user@personal:~$ curl -i "https://wiki.opensourceecology.org/load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector" HTTP/1.1 200 OK Server: nginx Date: Fri, 13 Apr 2018 00:16:49 GMT Content-Type: text/css; charset=utf-8 Content-Length: 921 Connection: keep-alive X-Content-Type-Options: nosniff Access-Control-Allow-Origin: * ETag: W/"0vstmhv" Cache-Control: public, max-age=300, s-maxage=300 Expires: Thu, 12 Apr 2018 23:21:50 GMT X-XSS-Protection: 1; mode=block X-Varnish: 297412 426598 Age: 3599 Via: 1.1 varnish-v4 Accept-Ranges: bytes Strict-Transport-Security: max-age=15552001 Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report" .lang{background:#F9F9F9;border:1px solid #E9E9E9;font-size:smaller;margin:0 0 1em 0;padding:0.5em 1em;text-align:left}.lang ul{display:inline;margin-left:0;padding-left:0}.lang ul li{border-left:1px solid #E4E4E4;display:inline;list-style:none;margin-left:0;padding:0 0.5em}.lang ul li.lang_main{border-left:none;display:inline;list-style:none;margin-left:0;padding-left:0}.lang ul a.external{background:none ! important;padding-right:0 ! 
important}.lang ul li.lang_title{display:none}.dtree{font-family:Verdana,Geneva,Arial,Helvetica,sans-serif;font-size:11px;color:#666;white-space:nowrap}.dtree img{border:0px;vertical-align:middle}.dtree a{color:#333;text-decoration:none}.dtree a.node,.dtree a.nodeSel{white-space:nowrap;padding:1px 2px 1px 2px}.dtree a.node:hover,.dtree a.nodeSel:hover{color:#333;text-decoration:underline}.dtree a.nodeSel{background-color:#c0d2ec}.dtree .clip{overflow:hidden;padding-bottom:1px}user@personal:~$* << Request >> 297412 - Begin req 297411 rxreq - Timestamp Start: 1523578609.269812 0.000000 0.000000 - Timestamp Req: 1523578609.269812 0.000000 0.000000 - ReqStart 127.0.0.1 36238 - ReqMethod GET - ReqURL /load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector - ReqProtocol HTTP/1.0 - ReqHeader X-Real-IP: 76.97.223.185 - ReqHeader X-Forwarded-For: 76.97.223.185 - ReqHeader X-Forwarded-Proto: https - ReqHeader X-Forwarded-Port: 443 - ReqHeader Host: wiki.opensourceecology.org - ReqHeader Connection: close - ReqHeader User-Agent: curl/7.38.0 - ReqHeader Accept: */* - ReqUnset X-Forwarded-For: 76.97.223.185 - ReqHeader X-Forwarded-For: 76.97.223.185, 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 76.97.223.185, 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_return hash - VCL_call HASH - VCL_return lookup - Hit 426598 - VCL_call HIT - VCL_return deliver - RespProtocol HTTP/1.1 - RespStatus 200 - RespReason OK - RespHeader Date: Thu, 12 Apr 2018 23:16:50 GMT - RespHeader Server: Apache - RespHeader X-Content-Type-Options: nosniff - RespHeader Access-Control-Allow-Origin: * - RespHeader ETag: W/"0vstmhv" - RespHeader Cache-Control: public, max-age=300, s-maxage=300 - RespHeader Expires: Thu, 12 Apr 2018 23:21:50 GMT - RespHeader X-XSS-Protection: 1; mode=block - RespHeader Content-Length: 921 - RespHeader Content-Type: text/css; charset=utf-8 - RespHeader X-Varnish: 297412 426598 - RespHeader Age: 3599 - RespHeader Via: 1.1 varnish-v4 - VCL_call 
DELIVER - VCL_return deliver - Timestamp Process: 1523578609.269849 0.000036 0.000036 - Debug "RES_MODE 2" - RespHeader Connection: close - RespHeader Accept-Ranges: bytes - Timestamp Resp: 1523578609.269870 0.000057 0.000021 - Debug "XXX REF 2" - ReqAcct 288 0 288 438 921 1359 - End
- but when I call the same thing from my browser, I get a PASS & fetch
* << Request >> 201266 - Begin req 201265 rxreq - Timestamp Start: 1523578751.880694 0.000000 0.000000 - Timestamp Req: 1523578751.880694 0.000000 0.000000 - ReqStart 127.0.0.1 36486 - ReqMethod GET - ReqURL /load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector - ReqProtocol HTTP/1.0 - ReqHeader X-Real-IP: 76.97.223.185 - ReqHeader X-Forwarded-For: 76.97.223.185 - ReqHeader X-Forwarded-Proto: https - ReqHeader X-Forwarded-Port: 443 - ReqHeader Host: wiki.opensourceecology.org - ReqHeader Connection: close - ReqHeader User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0 - ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 - ReqHeader Accept-Language: en-US,en;q=0.5 - ReqHeader Accept-Encoding: gzip, deflate, br - ReqHeader Upgrade-Insecure-Requests: 1 - ReqHeader If-None-Match: W/"1a4906v" - ReqUnset X-Forwarded-For: 76.97.223.185 - ReqHeader X-Forwarded-For: 76.97.223.185, 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 76.97.223.185, 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_return pass - VCL_call HASH - VCL_return lookup - VCL_call PASS - VCL_return fetch - Link bereq 201267 pass - Timestamp Fetch: 1523578751.993003 0.112309 0.112309 - RespProtocol HTTP/1.0 - RespStatus 304 - RespReason Not Modified - RespHeader Date: Fri, 13 Apr 2018 00:19:11 GMT - RespHeader Server: Apache - RespHeader ETag: W/"1a4906v" - RespHeader Expires: Fri, 13 Apr 2018 00:24:11 GMT - RespHeader Cache-Control: public, max-age=300, s-maxage=300 - RespProtocol HTTP/1.1 - RespHeader X-Varnish: 201266 - RespHeader Age: 0 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - VCL_return deliver - Timestamp Process: 1523578751.993043 0.112350 0.000040 - Debug "RES_MODE 0" - RespHeader Connection: close - Timestamp Resp: 1523578751.993091 0.112397 0.000048 - Debug "XXX REF 1" - ReqAcct 540 0 540 258 0 258 - End
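My guess (unconfirmed) is the browser's conditional-request header: the varnishlog above shows `If-None-Match: W/"1a4906v"` on the browser request but not on the curl one, and the backend answered 304. Replaying the request from curl with and without that header would isolate it; a sketch:

```shell
# Sketch: replay the request with the browser's If-None-Match header
# (ETag value copied from the varnishlog above) to see whether the
# conditional revalidation is what turns the HIT into a PASS
url='https://wiki.opensourceecology.org/load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector'
curl -s -o /dev/null -w 'plain:       %{http_code}\n' "$url"
curl -s -o /dev/null -w 'conditional: %{http_code}\n' \
  -H 'If-None-Match: W/"1a4906v"' "$url"
```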
- I think I just crossed the line of diminishing returns. The pages themselves are definitely being cached. Images are being cached. Maybe some minification resources are not being cached. I'm confident that the site will run fine after cutover. Then, once it's live in prod, I'll use munin graphs & varnish logs to show me which requests are *not* hits, and I can optimize from there (as was done with osemain by removing the Fundraising addon post-migration).
Wed Apr 11, 2018
- I haven't heard back from dreamhost, and they haven't deleted what remains of our backup data yet.
- checked the state of storage on dreamhost, and found our old backups dir has 52G. Our new backups dirs have 44G.
hancock% date Wed Apr 11 19:40:17 PDT 2018 hancock% pwd /home/marcin_ose hancock% du -sh hetzner1/* 12G hetzner1/20180409-052001 12G hetzner1/20180410-052001 12G hetzner1/20180411-052001 hancock% du -sh hetzner2/* 2.8G hetzner2/20180409-072001 2.8G hetzner2/20180410-072001 2.8G hetzner2/20180411-072001 hancock% du -sh backups/hetzner1/* 248M backups/hetzner1/20180402-052001 0 backups/hetzner1/20180406-052001 12G backups/hetzner1/20180407-052001 12G backups/hetzner1/20180408-052001 hancock% du -sh backups/hetzner2/* 0 backups/hetzner2/20180406-072001 14G backups/hetzner2/20180407-072001 14G backups/hetzner2/20180408-072001 hancock%
- since we have 3x copies of daily backups in the new dir, I just went ahead and deleted the 52G remaining from the old daily backup dir
hancock% date Wed Apr 11 19:43:20 PDT 2018 hancock% pwd /home/marcin_ose/backups hancock% du -sh hetzner1/* 248M hetzner1/20180402-052001 0 hetzner1/20180406-052001 12G hetzner1/20180407-052001 12G hetzner1/20180408-052001 hancock% du -sh hetzner2/* 0 hetzner2/20180406-072001 14G hetzner2/20180407-072001 14G hetzner2/20180408-072001 hancock% rm -rf hetzner1/* zsh: sure you want to delete all the files in /home/marcin_ose/backups/hetzner1 [yn]? y hancock% rm -rf hetzner2/* zsh: sure you want to delete all the files in /home/marcin_ose/backups/hetzner2 [yn]? y hancock% ls -lah hetzner1/ total 4.0K drwxr-xr-x 2 marcin_ose pg1589252 10 Apr 11 19:43 . drwxr-xr-x 4 marcin_ose pg1589252 4.0K Apr 9 13:20 .. hancock% ls -lah hetzner2/ total 4.0K drwxr-xr-x 2 marcin_ose pg1589252 10 Apr 11 19:43 . drwxr-xr-x 4 marcin_ose pg1589252 4.0K Apr 9 13:20 .. hancock%
- now our entire home dir's usage is 47G!
hancock% date Wed Apr 11 19:46:23 PDT 2018 hancock% pwd /home/marcin_ose hancock% du -sh 47G . hancock%
- I expect that once I start working on the wiki again, backups will grow to roughly 12*4 + 20*4 = ~128G. Then, after the wiki is migrated, the hetzner1 backups will become negligible (it's mostly just the wiki), so we'll only have ~20*4 = 80G of backups to store. The long-term solution is to migrate to s3, with lifecycle policies sending the first-of-the-month daily into glacier and automatically deleting all other backups a few days after upload. The s3 cost difference between 128G and 80G is small enough that I should focus on getting the wiki migrated before recoding our backup scripts to go to s3, all while hoping that dreamhost doesn't notice that our daily backups (significantly smaller than our >500G usage before) have just moved to another dir.
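Spelling out that arithmetic (sizes in GB; 4 retained dailies per host, ~12G/day for hetzner1 and ~20G/day for hetzner2 once the wiki lives there):

```shell
# Backup storage estimate from the figures above (GB)
echo "while wiki work is ongoing: $(( 12*4 + 20*4 ))G"   # 128G
echo "after wiki migration:       $(( 20*4 ))G"          # 80G
```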
- wow, within a few minutes of deleting the big directories in the 'backups' dir, I got an email from "Jin K" at dreamhost stating that our "Action Required: Disk usage warning - acceptable use policy violation" ticket was "RESOLVED!", thanking us for taking care of it. Either that response was automated, or I got it in just before they deleted the data for us.
Hi there! It looks like we've applied more time for you previously regarding the backup location that was left, but it looks like you've cleared that up since then as the location below is now empty: hancock:/home/marcin_ose/backups# 96K . Thanks for getting that done! This notice is just to let you know that we're all set and this matter is now closed. Please give us a shout at any time if you have any questions or concerns at all moving forward. We'd be happy to help! Thank you kindly, Jin K.
Mon Apr 09, 2018
- I confirmed that the backups from last night came into their new location
hancock% du -sh ../hetzner1/* 12G ../hetzner1/20180409-052001 hancock% du -sh ../hetzner2/* 2.8G ../hetzner2/20180409-072001 hancock%
- I deleted the encryption key from dreamhost's server. Future backups can be done on hetzner's servers directly.
hancock% chmod 0700 ose-backups-cron.key hancock% shred -u ose-backups-cron.key hancock%
- now our home dir's entire usage is 121G
hancock% date Mon Apr 9 13:18:36 PDT 2018 hancock% pwd /home/marcin_ose hancock% du -sh 121G . hancock%
- 104G of that is going to be automatically deleted by the cron over the next week as the dailies become stale
hancock% du -sh backups/* 4.0K backups/getGlacierJob.sh 48G backups/hetzner1 56G backups/hetzner2 4.0K backups/output.json 4.0K backups/readme.txt 64K backups/retryUploadToGlacier.log 4.0K backups/retryUploadToGlacier.sh 28M backups/uploadToGlacier 4.0K backups/uploadToGlacier.py 8.0K backups/uploadToGlacier.sh hancock% du -sh backups 104G backups hancock%
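The cron's 3-day retention (the -d 3 flag to cleanLocal.pl) can be previewed with find; this is my approximation of the script's criterion, not the script itself:

```shell
# Approximate preview of what a 3-day retention pass would delete:
# top-level daily dirs whose mtime is more than 3 days old
find /home/marcin_ose/backups/hetzner1 /home/marcin_ose/backups/hetzner2 \
  -maxdepth 1 -mindepth 1 -type d -mtime +3
```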
- I deleted the entire uploadToGlacier directory, which only had fileLists that failed to delete due to a minor bug in my script
hancock% du -sh uploadToGlacier/* 2.4M uploadToGlacier/hetzner1_20170701-052001.fileList.txt.bz2 2.4M uploadToGlacier/hetzner1_20170801-052001.fileList.txt.bz2 2.3M uploadToGlacier/hetzner1_20171201-062001.fileList.txt.bz2 2.3M uploadToGlacier/hetzner1_20180101-062001.fileList.txt.bz2 2.4M uploadToGlacier/hetzner1_20180201-062001.fileList.txt.bz2 2.4M uploadToGlacier/hetzner1_20180301-062002.fileList.txt.bz2 2.2M uploadToGlacier/hetzner1_20180401-052001.fileList.txt.bz2 2.0M uploadToGlacier/hetzner2_20170702-052001.fileList.txt.bz2 196K uploadToGlacier/hetzner2_20170801-072001.fileList.txt.bz2 284K uploadToGlacier/hetzner2_20170901-072001.fileList.txt.bz2 648K uploadToGlacier/hetzner2_20171001-072001.fileList.txt.bz2 276K uploadToGlacier/hetzner2_20171101-072001.fileList.txt.bz2 308K uploadToGlacier/hetzner2_20171202-072001.fileList.txt.bz2 488K uploadToGlacier/hetzner2_20180102-072001.fileList.txt.bz2 2.4M uploadToGlacier/hetzner2_20180202-072001.fileList.txt.bz2 3.4M uploadToGlacier/hetzner2_20180302-072001.fileList.txt.bz2 1.6M uploadToGlacier/hetzner2_20180401-072001.fileList.txt.bz2 hancock% rm -rf uploadToGlacier hancock%
- I updated the crontab to cleanBackups from the new backup dir as well
hancock% crontab -l ###--- BEGIN DREAMHOST BLOCK ###--- Changes made to this part of the file WILL be destroyed! # Backup site-root MAILTO="elifarley@gmail.com" @weekly /usr/local/bin/setlock -n /tmp/cronlock.2671804.96324 sh -c $'. \176/altroot/init.sh \012\043 \012\176/bin/mbkp.sh site-root' # Backup MONTHLY MAILTO="elifarley@gmail.com" @monthly /usr/local/bin/setlock -n /tmp/cronlock.2671804.96354 sh -c $'. \176/altroot/init.sh \012\043 \012\176/bin/mbkp.sh home \012\176/bin/mbkp.sh altroot \012\043\176/bin/mbkp.sh blog-cache \012' # delete older backup files 20 22 * * * /usr/bin/perl /home/marcin_ose/bin/cleanLocal.pl -l /home/marcin_ose/backups/hetzner1 -d 3 &>> /home/marcin_ose/logs/cleanBackups.log 20 22 * * * /usr/bin/perl /home/marcin_ose/bin/cleanLocal.pl -l /home/marcin_ose/backups/hetzner2 -d 3 &>> /home/marcin_ose/logs/cleanBackups.log ###--- You can make changes below the next line and they will be preserved! ###--- END DREAMHOST BLOCK hancock% hancock% crontab -e ... hancock% crontab -l ###--- BEGIN DREAMHOST BLOCK ###--- Changes made to this part of the file WILL be destroyed! # Backup site-root MAILTO="elifarley@gmail.com" @weekly /usr/local/bin/setlock -n /tmp/cronlock.2671804.96324 sh -c $'. \176/altroot/init.sh \012\043 \012\176/bin/mbkp.sh site-root' # Backup MONTHLY MAILTO="elifarley@gmail.com" @monthly /usr/local/bin/setlock -n /tmp/cronlock.2671804.96354 sh -c $'. 
\176/altroot/init.sh \012\043 \012\176/bin/mbkp.sh home \012\176/bin/mbkp.sh altroot \012\043\176/bin/mbkp.sh blog-cache \012' # delete older backup files 20 22 * * * /usr/bin/perl /home/marcin_ose/bin/cleanLocal.pl -l /home/marcin_ose/backups/hetzner1 -d 3 &>> /home/marcin_ose/logs/cleanBackups.log 20 22 * * * /usr/bin/perl /home/marcin_ose/bin/cleanLocal.pl -l /home/marcin_ose/backups/hetzner2 -d 3 &>> /home/marcin_ose/logs/cleanBackups.log 20 22 * * * /usr/bin/perl /home/marcin_ose/bin/cleanLocal.pl -l /home/marcin_ose/hetzner1 -d 3 &>> /home/marcin_ose/logs/cleanBackups.log 20 22 * * * /usr/bin/perl /home/marcin_ose/bin/cleanLocal.pl -l /home/marcin_ose/hetzner2 -d 3 &>> /home/marcin_ose/logs/cleanBackups.log ###--- You can make changes below the next line and they will be preserved! ###--- END DREAMHOST BLOCK hancock%
Sun Apr 08, 2018
- I checked again just after midnight; the retry appears to have worked pretty great. Just 2 archives failed on this run
hancock% date Sat Apr 7 22:14:46 PDT 2018 hancock% pwd /home/marcin_ose/backups hancock% du -sh uploadToGlacier/*.gpg 39G uploadToGlacier/hetzner1_20170801-052001.tar.gpg 12G uploadToGlacier/hetzner1_20180101-062001.tar.gpg hancock%
- the archive list on hetzner2 looks pretty great too
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020 hetzner1_20170701-052001.fileList.txt.bz2.gpg hetzner1_20170801-052001.fileList.txt.bz2.gpg hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates hetzner1_20171001-052001.fileList.txt.bz2.gpg hetzner1_20171001-052001.tar.gpg hetzner1_20171101-062001.fileList.txt.bz2.gpg hetzner1_20171101-062001.tar.gpg hetzner1_20171201-062001.fileList.txt.bz2.gpg hetzner1_20180101-062001.fileList.txt.bz2.gpg hetzner1_20180201-062001.fileList.txt.bz2.gpg hetzner1_20180201-062001.tar.gpg hetzner1_20180301-062002.fileList.txt.bz2.gpg hetzner1_20180301-062002.tar.gpg hetzner1_20180401-052001.fileList.txt.bz2.gpg hetzner1_20180401-052001.tar.gpg hetzner2_20170702-052001.fileList.txt.bz2.gpg hetzner2_20170702-052001.tar.gpg hetzner2_20170801-072001.fileList.txt.bz2.gpg hetzner2_20170801-072001.tar.gpg hetzner2_20170901-072001.fileList.txt.bz2.gpg hetzner2_20170901-072001.tar.gpg hetzner2_20171001-072001.fileList.txt.bz2.gpg hetzner2_20171001-072001.tar.gpg hetzner2_20171101-072001.fileList.txt.bz2.gpg hetzner2_20171101-072001.tar.gpg hetzner2_20171202-072001.fileList.txt.bz2.gpg hetzner2_20171202-072001.tar.gpg hetzner2_20180102-072001.fileList.txt.bz2.gpg hetzner2_20180102-072001.tar.gpg hetzner2_20180202-072001.fileList.txt.bz2.gpg hetzner2_20180302-072001.fileList.txt.bz2.gpg hetzner2_20180401-072001.fileList.txt.bz2.gpg hetzner2_20180401-072001.tar.gpg id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates 
id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]#
- I kicked off a fresh inventory fetch; hopefully it'll pick up the archives I just uploaded on retry
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 vault sync --max-age=0 --wait deleteMeIn2020
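fwiw, if glacier-cli ever gives trouble, the same inventory refresh can be requested with the stock aws CLI. A sketch, assuming the CLI is configured with the same credentials; `<jobId>` is a placeholder taken from the list-jobs output:

```shell
# queue an inventory-retrieval job against the vault
aws glacier initiate-job --account-id - --region us-west-2 \
    --vault-name deleteMeIn2020 \
    --job-parameters '{"Type": "inventory-retrieval"}'

# inventory jobs typically take ~4 hours; poll, then download the result
aws glacier list-jobs --account-id - --region us-west-2 --vault-name deleteMeIn2020
aws glacier get-job-output --account-id - --region us-west-2 \
    --vault-name deleteMeIn2020 --job-id <jobId> inventory.json
```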
- meanwhile, back on dreamhost's hancock, I kicked off the re-upload attempt for the remaining 2x archives
- I also checked the aws console; the bill for April so far is $3.32. Not bad!
- most of that is $3.25 for 65,038 requests.
- Unfortunately, I discovered that the AWS Budget service is itself not free. We apparently have $0 charges because there's 62 days of free Budget service in the Free Tier.
- according to their docs, the first 2x budgets are free of charge. Additional budgets are $0.02/day. I'll leave our $10 budget email. If we get charged >$0 for it ever, I'll delete it.
- I changed the existing budget from $1 to $8 (so $96/yr). I added alerts for when both the actual & forecasted amounts exceed the budget. fwiw, we're currently only being charged $2.28, but the forecast is $9.25. I assume that's expecting us to keep uploading at our current rate all month, which won't happen..
- ...
- when I woke up, the 2x remaining uploads completed successfully!
- I checked on the archive list, but it still didn't show the complete list; so I kicked off another inventory refresh
- after the inventory shows all the archives, I'll delete them from dreamhost. Tomorrow is the deadline, so hopefully this can be done today.
- I got an email that the projected budget was to exceed the $8 budget. The actual budget is still $3.97.
- ...
- a few hours later, the inventory sync was complete, and many of the archives were now listed
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020 hetzner1_20170701-052001.fileList.txt.bz2.gpg hetzner1_20170701-052001.tar.gpg hetzner1_20170801-052001.fileList.txt.bz2.gpg hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates hetzner1_20171001-052001.fileList.txt.bz2.gpg hetzner1_20171001-052001.tar.gpg hetzner1_20171101-062001.fileList.txt.bz2.gpg hetzner1_20171101-062001.tar.gpg id:lryfyQFE4NbtWg5Q6uTq8Qqyc-y9il9WYe7lHs8H2lzFSBADOJQmCIgp6FxrkiaCcwnSMIReJPWyWcR4UOnurxwONhw8fojEHQTTeOpkf6fgfWBAPP9P6GOZZ0v8d8Jz_-QFVaV6Bw hetzner1_20171201-062001.fileList.txt.bz2.gpg id:NR3Z9zdD2rW0NG1y3QW735TzykIivP_cnFDMCNX6RcIPh0mRb_6QiC5qy1GrBTIoroorfzaGDIKQ0BY18jbcR3XfEzfcmrZ1FiT1YvQw-c1ag6vT46-noPvmddZ_zyy2O1ItIygI6Q hetzner1_20171201-062001.fileList.txt.bz2.gpg hetzner1_20171201-062001.tar.gpg hetzner1_20180101-062001.fileList.txt.bz2.gpg hetzner1_20180201-062001.fileList.txt.bz2.gpg hetzner1_20180201-062001.tar.gpg hetzner1_20180301-062002.fileList.txt.bz2.gpg hetzner1_20180301-062002.tar.gpg hetzner1_20180401-052001.fileList.txt.bz2.gpg hetzner1_20180401-052001.tar.gpg hetzner2_20170702-052001.fileList.txt.bz2.gpg hetzner2_20170702-052001.tar.gpg hetzner2_20170801-072001.fileList.txt.bz2.gpg hetzner2_20170801-072001.tar.gpg hetzner2_20170901-072001.fileList.txt.bz2.gpg hetzner2_20170901-072001.tar.gpg hetzner2_20171001-072001.fileList.txt.bz2.gpg hetzner2_20171001-072001.tar.gpg hetzner2_20171101-072001.fileList.txt.bz2.gpg hetzner2_20171101-072001.tar.gpg hetzner2_20171202-072001.fileList.txt.bz2.gpg hetzner2_20171202-072001.tar.gpg hetzner2_20180102-072001.fileList.txt.bz2.gpg hetzner2_20180102-072001.tar.gpg hetzner2_20180202-072001.fileList.txt.bz2.gpg 
hetzner2_20180202-072001.tar.gpg hetzner2_20180302-072001.fileList.txt.bz2.gpg hetzner2_20180302-072001.tar.gpg hetzner2_20180401-072001.fileList.txt.bz2.gpg hetzner2_20180401-072001.tar.gpg id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]#
- we can see that these archives are clearly in the glacier vault (both the encrypted tarballs are present & named as expected)
hetzner1_20170701-052001 hetzner1_20171001-052001 hetzner1_20171101-062001 hetzner1_20180201-062001 hetzner1_20180301-062002 hetzner1_20180401-052001 hetzner2_20170702-052001 hetzner2_20170801-072001 hetzner2_20170901-072001 hetzner2_20171001-072001 hetzner2_20171101-072001 hetzner2_20171202-072001 hetzner2_20180102-072001 hetzner2_20180202-072001 hetzner2_20180302-072001 hetzner2_20180401-072001
- but there's a couple more that have an odd naming convention and/or duplicate archives from when I was testing the upload script; these also appear to be in the glacier vault:
hetzner1_20170901-052001 hetzner1_20171201-062001
- finally, there's a couple for which I didn't include the file name in the archive description, from my very early testing--back before I realized that glacier wouldn't remember the archive name, and that the archive would only be given a reference uid. From my notes, I know that these archives correspond to:
hetzner1_20171001-052001
- therefore, the following archives have been fully uploaded into glacier, and they can be deleted from dreamhost:
hetzner1_20170701-052001 hetzner1_20170901-052001 hetzner1_20171001-052001 hetzner1_20171101-062001 hetzner1_20171201-062001 hetzner1_20180201-062001 hetzner1_20180301-062002 hetzner1_20180401-052001 hetzner2_20170702-052001 hetzner2_20170801-072001 hetzner2_20170901-072001 hetzner2_20171001-072001 hetzner2_20171101-072001 hetzner2_20171202-072001 hetzner2_20180102-072001 hetzner2_20180202-072001 hetzner2_20180302-072001 hetzner2_20180401-072001
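Given that list, a pre-delete sanity check could be sketched as follows. This is a hypothetical helper, not part of the workflow above: `archiveList.txt` stands in for a saved copy of the `glacier.py --region us-west-2 archive list deleteMeIn2020` output, and the sample contents are illustrative.

```shell
#!/bin/bash
# hypothetical sketch: "archiveList.txt" stands in for saved `archive list` output
cat > archiveList.txt <<'EOF'
hetzner1_20180401-052001.fileList.txt.bz2.gpg
hetzner1_20180401-052001.tar.gpg
hetzner2_20180401-072001.tar.gpg
EOF

# a prefix is only safe to delete locally if BOTH its tarball and its
# fileList metadata made it into the vault
for prefix in hetzner1_20180401-052001 hetzner2_20180401-072001; do
  if grep -q "^${prefix}\.tar\.gpg" archiveList.txt && \
     grep -q "^${prefix}\.fileList\.txt\.bz2\.gpg" archiveList.txt; then
    echo "${prefix}: safe to delete"
  else
    echo "${prefix}: incomplete -- keep local copy"
  fi
done | tee checkResults.txt
```

In this sample the second prefix has its tarball but no fileList in the vault, so the script flags it as incomplete rather than clearing it for deletion.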
- I kicked off the deletions
hancock% rm -rf hetzner1/20170701-052001 hancock% rm -rf hetzner1/20170701-052001 hancock% rm -rf hetzner1/20171001-052001 hancock% rm -rf hetzner1/20171101-062001 hancock% rm -rf hetzner1/20180201-062001 hancock% rm -rf hetzner1/20180301-062002 hancock% rm -rf hetzner1/20180401-052001 hancock% rm -rf hetzner2/20170702-052001 hancock% rm -rf hetzner2/20170801-072001 hancock% rm -rf hetzner2/20170901-072001 hancock% rm -rf hetzner2/20171001-072001 hancock% rm -rf hetzner2/20171101-072001 hancock% rm -rf hetzner2/20171202-072001 hancock% rm -rf hetzner2/20180102-072001 hancock% rm -rf hetzner2/20180202-072001 hancock% rm -rf hetzner2/20180302-072001 hancock% rm -rf hetzner2/20180401-072001 hancock% rm -rf hetzner1/20170901-052001 hancock% rm -rf hetzner1/20171201-062001 hancock% rm -rf hetzner1/20171001-052001 hancock%
- and here's what remains
hancock% du -sh hetzner1/* 39G hetzner1/20170801-052001 12G hetzner1/20180101-062001 248M hetzner1/20180402-052001 0 hetzner1/20180403-052001 12G hetzner1/20180404-052001 12G hetzner1/20180405-052001 12G hetzner1/20180406-052001 12G hetzner1/20180407-052001 12G hetzner1/20180408-052001 hancock% du -sh hetzner2/* 0 hetzner2/20180403-072001 14G hetzner2/20180404-072001 14G hetzner2/20180405-072001 14G hetzner2/20180406-072001 14G hetzner2/20180407-072001 14G hetzner2/20180408-072001 hancock%
- so that finishes off hetzner2. The backups present are just the recent few days' worth (eventually dailies may have to go to s3, but for now I'm primarily focused on shipping our historical monthlies off to somewhere safe = glacier)
- hetzner1 has 2x remaining archives that need to be confirmed in glacier, then deleted from dreamhost:
hancock% du -sh hetzner1/* 39G hetzner1/20170801-052001 12G hetzner1/20180101-062001
- unfortunately, the hetzner2 backups have exploded to 14G again; they should be ~3G each. Looks like I still had an 'orig' copy of the archives I restored when testing glacier in the /root/glacierRestore directory. The '/root/' directory is itself backed-up.
- I updated the "restore from glacier" documentation on the wiki so that the 'glacier-cli' dir is placed in '/root/sandbox/', the 'glacier.py' binary is linked to by '/root/bin/glacier.py' (/root/bin is already in $PATH), and that the restores themselves get done in a temporary directory in /var/tmp/
- I updated the upload path in '/root/backups/backup.settings' to be '/home/marcin_ose/hetzner2' instead of '/home/marcin_ose/backups/hetzner2'
- I updated the upload path in '/usr/home/osemain/backups/backup.settings' to be '/home/marcin_ose/hetzner1' instead of '/home/marcin_ose/backups/hetzner1'
- ...
- I checked again just before midnight, and here's the new listing
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020 hetzner1_20170701-052001.fileList.txt.bz2.gpg hetzner1_20170701-052001.tar.gpg hetzner1_20170801-052001.fileList.txt.bz2.gpg hetzner1_20170801-052001.tar.gpg hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates hetzner1_20171001-052001.fileList.txt.bz2.gpg hetzner1_20171001-052001.tar.gpg hetzner1_20171101-062001.fileList.txt.bz2.gpg hetzner1_20171101-062001.tar.gpg id:lryfyQFE4NbtWg5Q6uTq8Qqyc-y9il9WYe7lHs8H2lzFSBADOJQmCIgp6FxrkiaCcwnSMIReJPWyWcR4UOnurxwONhw8fojEHQTTeOpkf6fgfWBAPP9P6GOZZ0v8d8Jz_-QFVaV6Bw hetzner1_20171201-062001.fileList.txt.bz2.gpg id:NR3Z9zdD2rW0NG1y3QW735TzykIivP_cnFDMCNX6RcIPh0mRb_6QiC5qy1GrBTIoroorfzaGDIKQ0BY18jbcR3XfEzfcmrZ1FiT1YvQw-c1ag6vT46-noPvmddZ_zyy2O1ItIygI6Q hetzner1_20171201-062001.fileList.txt.bz2.gpg hetzner1_20171201-062001.tar.gpg hetzner1_20180101-062001.fileList.txt.bz2.gpg hetzner1_20180101-062001.tar.gpg hetzner1_20180201-062001.fileList.txt.bz2.gpg hetzner1_20180201-062001.tar.gpg hetzner1_20180301-062002.fileList.txt.bz2.gpg hetzner1_20180301-062002.tar.gpg hetzner1_20180401-052001.fileList.txt.bz2.gpg hetzner1_20180401-052001.tar.gpg hetzner2_20170702-052001.fileList.txt.bz2.gpg hetzner2_20170702-052001.tar.gpg hetzner2_20170801-072001.fileList.txt.bz2.gpg hetzner2_20170801-072001.tar.gpg hetzner2_20170901-072001.fileList.txt.bz2.gpg hetzner2_20170901-072001.tar.gpg hetzner2_20171001-072001.fileList.txt.bz2.gpg hetzner2_20171001-072001.tar.gpg hetzner2_20171101-072001.fileList.txt.bz2.gpg hetzner2_20171101-072001.tar.gpg hetzner2_20171202-072001.fileList.txt.bz2.gpg hetzner2_20171202-072001.tar.gpg hetzner2_20180102-072001.fileList.txt.bz2.gpg hetzner2_20180102-072001.tar.gpg 
hetzner2_20180202-072001.fileList.txt.bz2.gpg hetzner2_20180202-072001.tar.gpg hetzner2_20180302-072001.fileList.txt.bz2.gpg hetzner2_20180302-072001.tar.gpg hetzner2_20180401-072001.fileList.txt.bz2.gpg hetzner2_20180401-072001.tar.gpg id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]#
- that includes the following, which are the 2x archives that were previously absent
hetzner1/20170801-052001 hetzner1/20180101-062001
- I deleted those archives from dreamhost
rm -rf hetzner1/20170801-052001 rm -rf hetzner1/20180101-062001
- that leaves only recent daily archives
hancock% du -sh hetzner1/* 248M hetzner1/20180402-052001 0 hetzner1/20180403-052001 12G hetzner1/20180404-052001 12G hetzner1/20180405-052001 12G hetzner1/20180406-052001 12G hetzner1/20180407-052001 12G hetzner1/20180408-052001 hancock% du -sh hetzner2/* 0 hetzner2/20180403-072001 14G hetzner2/20180404-072001 14G hetzner2/20180405-072001 14G hetzner2/20180406-072001 14G hetzner2/20180407-072001 14G hetzner2/20180408-072001 hancock%
Sat Apr 07, 2018
- checked dreamhost; the screen died (damn dreamhost), so I can't see the last command's output (shoulda sent it to a log file..)
- anyway, I can tell which files appeared to have failed from the gpg files in the dir
hancock% date Sat Apr 7 08:27:09 PDT 2018 hancock% pwd /home/marcin_ose/backups/uploadToGlacier hancock% ls -lah *.gpg -rw-r--r-- 1 marcin_ose pg1589252 39G Apr 4 18:59 hetzner1_20170701-052001.tar.gpg -rw-r--r-- 1 marcin_ose pg1589252 39G Apr 4 22:13 hetzner1_20170801-052001.tar.gpg -rw-r--r-- 1 marcin_ose pg1589252 2.3M Apr 3 16:15 hetzner1_20171201-062001.fileList.txt.bz2.gpg -rw-r--r-- 1 marcin_ose pg1589252 12G Apr 3 16:37 hetzner1_20171201-062001.tar.gpg -rw-r--r-- 1 marcin_ose pg1589252 12G Apr 5 00:51 hetzner1_20180101-062001.tar.gpg -rw-r--r-- 1 marcin_ose pg1589252 14G Apr 4 11:00 hetzner2_20180202-072001.tar.gpg -rw-r--r-- 1 marcin_ose pg1589252 25G Apr 4 12:39 hetzner2_20180302-072001.tar.gpg hancock%
- I can't confirm because the inventory is stale, but I kicked off a sync
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020 hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates hetzner1_20171001-052001.fileList.txt.bz2.gpg hetzner1_20171001-052001.tar.gpg hetzner1_20171101-062001.fileList.txt.bz2.gpg hetzner1_20171101-062001.tar.gpg hetzner1_20171201-062001.fileList.txt.bz2.gpg id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]# vault sync deleteMeIn2020 glacier: queued inventory job for u'deleteMeIn2020' [root@hetzner2 glacier-cli]#
- so this is the list of failed backups that I need to retry. It's ~150G, which is too big to stash on hetzner2; hopefully this retry knocks it below 90G, which is how much free space we have on hetzner2
hancock% du -sh *.gpg 39G hetzner1_20170701-052001.tar.gpg 39G hetzner1_20170801-052001.tar.gpg 2.3M hetzner1_20171201-062001.fileList.txt.bz2.gpg 12G hetzner1_20171201-062001.tar.gpg 12G hetzner1_20180101-062001.tar.gpg 14G hetzner2_20180202-072001.tar.gpg 25G hetzner2_20180302-072001.tar.gpg hancock%
- the most concerning is the first backup I made of the hetzner1 server, from 201707. I made that backup just before I deleted anything, so we really, really need it in glacier.
- I copied the uploadToGlacier.sh script to a modified new script named retryUploadToGlacier.sh
hancock% cat retryUploadToGlacier.sh #!/bin/bash -x ############ # SETTINGS # ############ backupArchives="uploadToGlacier/hetzner1_20170701-052001.tar.gpg uploadToGlacier/hetzner1_20170801-052001.tar.gpg uploadToGlacier/hetzner1_20171201-062001.fileList.txt.bz2.gpg uploadToGlacier/hetzner1_20171201-062001.tar.gpg uploadToGlacier/hetzner1_20180101-062001.tar.gpg uploadToGlacier/hetzner2_20180202-072001.tar.gpg uploadToGlacier/hetzner2_20180302-072001.tar.gpg" export AWS_ACCESS_KEY_ID='CHANGEME' export AWS_SECRET_ACCESS_KEY='CHANGEME' ############## # DO UPLOADS # ############## for archive in $(echo $backupArchives); do # upload it /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 "${archive}" if $? -eq 0 ; then rm -f "${archive}" fi done hancock%
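One note on the script above: the `if $? -eq 0 ; then` test actually executes the exit status itself as a command (e.g. `1 -eq 0`, which is visible in an earlier trace), and that always fails. It fails safe (nothing is ever deleted), but it also means successful uploads never get cleaned up automatically. A corrected sketch of the loop, with `upload_archive()` as a local stand-in for the real glacier.py invocation:

```shell
#!/bin/bash
# upload_archive() stands in for the real call:
#   /home/marcin_ose/.local/lib/aws/bin/python \
#     /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 \
#     archive upload deleteMeIn2020 "$1"
upload_archive() { true; }

# illustrative staging dir with one archive
mkdir -p uploadToGlacier
touch uploadToGlacier/hetzner1_20170701-052001.tar.gpg

for archive in uploadToGlacier/*.gpg; do
  # test the command's exit status directly instead of `if $? -eq 0`
  if upload_archive "${archive}"; then
    rm -f "${archive}"   # delete only after a confirmed successful upload
  fi
done
```

With the stub always succeeding, the archive is removed from the staging dir; a failed upload would leave it in place for the next retry pass.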
- kicked off the script, this time logging to a file
hancock% ./retryUploadToGlacier.sh &> retryUploadToGlacier.log
- it's uploading the 201707 archive of hetzner1
hancock% tail -f retryUploadToGlacier.log + backupArchives='uploadToGlacier/hetzner1_20170701-052001.tar.gpg uploadToGlacier/hetzner1_20170801-052001.tar.gpg uploadToGlacier/hetzner1_20171201-062001.fileList.txt.bz2.gpg uploadToGlacier/hetzner1_20171201-062001.tar.gpg uploadToGlacier/hetzner1_20180101-062001.tar.gpg uploadToGlacier/hetzner2_20180202-072001.tar.gpg uploadToGlacier/hetzner2_20180302-072001.tar.gpg' + export AWS_ACCESS_KEY_ID=CHANGEME + AWS_ACCESS_KEY_ID=CHANGEME + export AWS_SECRET_ACCESS_KEY=CHANGEME + AWS_SECRET_ACCESS_KEY=CHANGEME ++ echo uploadToGlacier/hetzner1_20170701-052001.tar.gpg uploadToGlacier/hetzner1_20170801-052001.tar.gpg uploadToGlacier/hetzner1_20171201-062001.fileList.txt.bz2.gpg uploadToGlacier/hetzner1_20171201-062001.tar.gpg uploadToGlacier/hetzner1_20180101-062001.tar.gpg uploadToGlacier/hetzner2_20180202-072001.tar.gpg uploadToGlacier/hetzner2_20180302-072001.tar.gpg + for archive in '$(echo $backupArchives)' + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 uploadToGlacier/hetzner1_20170701-052001.tar.gpg
- I'll check on this in the evening
Thu Apr 05, 2018
- the backup script is still running on the dreamhost server
- It's currently on 'hetzner1_20180401-052001', which is the last one!
- It looks like some uploads have succeeded & some have failed. The staging dir ('uploadToGlacier/'), which holds all the encrypted archives (those that fail intentionally don't get deleted so I can retry them later), has swelled to 150G. I already deleted ~250G, so I think dreamhost can deal with that (at least for 4 more days, which is our deadline)
- I got an email saying that the $1/mo budget on our aws account was exceeded. Good to know that works! I'll increase it to $10 soon
- I'll come back to this tomorrow, after the script has finished running. Then I'll try uploading the remaining files sequentially in a loop, and hopefully they'll all be done by our deadline
- Marcin emailed me regarding broken images on the workshops page of osemain https://www.opensourceecology.org/workshops-and-programs/ .
- I can't fix this now as my phone is broken & I have to rebuild it + restore my 2fa tokens to login, but I told him the fix is to replace the absolute URLs using the wrong protocol (http) with relative paths to the images, which are more robust
Wed Apr 04, 2018
- the upload of the archive 'hetzner1_20171201-062001.tar.gpg' failed again last night!
+ tar -cvf /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar hetzner1/20171201-062001/ hetzner1/20171201-062001/ hetzner1/20171201-062001/public_html/ hetzner1/20171201-062001/public_html/public_html.20171201-062001.tar.bz2 + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar.gpg Traceback (most recent call last): File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 736, in <module> main() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 732, in main App().main() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 718, in main self.args.func() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 500, in archive_upload file_obj=self.args.file, description=name) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/vault.py", line 178, in create_archive_from_file writer.write(data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 219, in write self.partitioner.write(data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 61, in write self._send_part() File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 75, in _send_part self.send_fn(part) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 222, in _upload_part self.uploader.upload_part(self.next_part_index, part_data) File 
"/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 129, in upload_part content_range, part_data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/layer1.py", line 637, in upload_part response_headers=response_headers) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/layer1.py", line 84, in make_request raise UnexpectedHTTPResponseError(ok_responses, response) boto.glacier.exceptions.UnexpectedHTTPResponseError: Expected 204, got (408, code=RequestTimeoutException, message=Request timed out.) + 1 -eq 0 marcin_ose@hancock:~/backups$
- the good news is that my new if statement prevented it from being deleted
hancock% ls -lah uploadToGlacier total 12G drwxr-xr-x 2 marcin_ose pg1589252 4.0K Apr 3 16:37 . drwxr-xr-x 5 marcin_ose pg1589252 4.0K Apr 3 15:37 .. -rw-r--r-- 1 marcin_ose pg1589252 2.3M Apr 3 16:14 hetzner1_20171201-062001.fileList.txt.bz2 -rw-r--r-- 1 marcin_ose pg1589252 2.3M Apr 3 16:15 hetzner1_20171201-062001.fileList.txt.bz2.gpg -rw-r--r-- 1 marcin_ose pg1589252 12G Apr 3 16:37 hetzner1_20171201-062001.tar.gpg hancock%
- I found a lot of people complain about this timeout issue. One post mentioned that we could increase the boto num_retries from the default (None) to some small number, but someone responded saying it didn't help them.. https://github.com/uskudnik/amazon-glacier-cmd-interface/issues/171
- err, I checked the documentation, and it shows the default is actually '5', not '0' http://docs.pythonboto.org/en/latest/boto_config_tut.html#boto
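per that boto config page, the retry count is read from the `[Boto]` section of boto's config file; a sketch, assuming the standard locations (~/.boto or /etc/boto.cfg):

```ini
# ~/.boto -- raise boto's per-request retry count (docs say the default is 5)
[Boto]
num_retries = 10
```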
- I am concerned that these 408 errors & all our retries will result in higher bills, if only because of the additional requests--say, an upload that would normally take 1,000 requests now takes 1,500 because of the retries.
- I checked the aws console, and our bill for March is $0.41.
- Not bad considering we did a restore of 12G of data. It lists a $0.01 fee for restoring 1.187G of data. Indeed, I restored ~12G of data, but it looks like aws free tier permits 10G per month of data retrievals. So we got that restore test for essentially free; awesome.
- our largest spend is 7,797 requests at $0.039. I'm afraid that increasing the chunk size to decrease the # of requests may increase the timeout risk? I think I'll just stick with the default & cross my fingers
- and $0.01 for 2.456 GB-Mo storage fee
- I did a sync & listed the archives on hetzner2, and now it lists all the files I'd expect, except the one that keeps failing with timeout issues
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 vault sync deleteMeIn2020 [root@hetzner2 glacier-cli]# ./glacier.py archive list deleteMeIn2020 hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates hetzner1_20171001-052001.fileList.txt.bz2.gpg hetzner1_20171001-052001.tar.gpg hetzner1_20171101-062001.fileList.txt.bz2.gpg hetzner1_20171101-062001.tar.gpg hetzner1_20171201-062001.fileList.txt.bz2.gpg id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]#
- except for the timeout issues, this process appears to work; we have only 5 days to delete all this data, so I'll kick off the upload for all remaining archives using the script
backupDirs="hetzner2/20170702-052001 hetzner2/20170801-072001 hetzner2/20170901-072001 hetzner2/20171001-072001 hetzner2/20171101-072001 hetzner2/20171202-072001 hetzner2/20180102-072001 hetzner2/20180202-072001 hetzner2/20180302-072001 hetzner2/20180401-072001 hetzner1/20170701-052001 hetzner1/20170801-052001 hetzner1/20180101-062001 hetzner1/20180201-062001 hetzner1/20180301-062002 hetzner1/20180401-052001"
- and the manual re-upload attempt of 'hetzner1_20171201-062001.tar.gpg' failed again with a timeout; I'll hold off on the re-upload until the other script finishes; hopefully only a small subset of archives will need a retry
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar.gpg Traceback (most recent call last): File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 736, in <module> main() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 732, in main App().main() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 718, in main self.args.func() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 500, in archive_upload file_obj=self.args.file, description=name) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/vault.py", line 178, in create_archive_from_file writer.write(data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 219, in write self.partitioner.write(data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 61, in write self._send_part() File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 75, in _send_part self.send_fn(part) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 222, in _upload_part self.uploader.upload_part(self.next_part_index, part_data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 129, in upload_part content_range, part_data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/layer1.py", line 637, in upload_part response_headers=response_headers) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/layer1.py", line 84, in make_request raise UnexpectedHTTPResponseError(ok_responses, response) boto.glacier.exceptions.UnexpectedHTTPResponseError: Expected 204, got (408, code=RequestTimeoutException, 
message=Request timed out.) hancock%
- I just discovered that there are 2x versions of glacier-cli; one was forked from the other https://github.com/basak/glacier-cli/pull/24
- determined that our dreamhost glacier-cli sandbox was checked out from basak's repo (the original repo), which was last updated on Feb 6
hancock% pwd /home/marcin_ose/sandbox/glacier-cli hancock% git remote show origin * remote origin Fetch URL: git://github.com/basak/glacier-cli.git Push URL: git://github.com/basak/glacier-cli.git HEAD branch: master Remote branches: annex-hook-script tracked master tracked Local branch configured for 'git pull': master merges with remote master Local ref configured for 'git push': master pushes to master (up to date) hancock%
- determined that the hetzner2 repo uses the same origin
[root@hetzner2 glacier-cli]# pwd /root/backups/glacierRestore/glacier-cli [root@hetzner2 glacier-cli]# git remote show origin * remote origin Fetch URL: git://github.com/basak/glacier-cli.git Push URL: git://github.com/basak/glacier-cli.git HEAD branch: master Remote branches: annex-hook-script tracked master tracked Local branch configured for 'git pull': master merges with remote master Local ref configured for 'git push': master pushes to master (up to date) [root@hetzner2 glacier-cli]#
- important to note that the 2x repos document different commands for upload.
- the 'basak' repo is more basic
glacier vault list glacier vault create vault-name glacier vault sync [--wait] [--fix] [--max-age hours] vault-name glacier archive list vault-name glacier archive upload [--name archive-name] vault-name filename glacier archive retrieve [--wait] [-o filename] [--multipart-size bytes] vault-name archive-name glacier archive retrieve [--wait] [--multipart-size bytes] vault-name archive-name [archive-name...] glacier archive delete vault-name archive-name glacier job list
- but the 'pkaleta' repo has options for the part size & thread count for the upload
glacier vault list glacier vault create vault-name glacier vault sync [--wait] [--fix] [--max-age hours] vault-name glacier archive list vault-name glacier archive upload [--encrypt] [--concurrent [--part-size size] [--num-threads count]] [--name archive-name] vault-name filename glacier archive retrieve [--wait] [--decrypt] [-o filename] [--part-size bytes] vault-name archive-name [archive-name...] glacier archive delete vault-name archive-name glacier job list
- so if I have horrible timeout issues from the script as-is, I'll try manual re-uploads using the 'pkaleta' repo to see if I get better results
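For reference, a manual re-upload via the 'pkaleta' fork would look something like this, per its usage text above. The part-size and thread-count values are my guesses, and I haven't verified what units the fork expects for --part-size:

```shell
# hypothetical 'pkaleta' fork invocation -- untested; tune --part-size and
# --num-threads, and verify the fork's units for --part-size first
./glacier.py --region us-west-2 archive upload \
    --concurrent --part-size 134217728 --num-threads 2 \
    deleteMeIn2020 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar.gpg
```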
Tue Apr 03, 2018
- Marcin pointed out some 'disk full' sql errors on our wordpress site (why is our site leaking error messages to visitors anyway?!?). I accidentally filled the disk when testing the glacier restore (12G download + 12G copy of the backup file + 12G decrypted + >12G uncompressed .. it adds up!). Because we're using varnish, it didn't take the whole site down, just the cache misses. I deleted my glacier restore files & banned the whole varnish cache for all sites to fix it.
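For the record, the catch-all ban can be issued like this (a sketch; the exact varnishadm connection options depend on our varnish setup):

```shell
# hypothetical: ban every cached object across all hosts at once
varnishadm "ban req.http.host ~ ."
```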
- checked the inventory, but the backups I uploaded last night were not listed yet
[root@hetzner2 glacier-cli]# ./glacier.py archive list deleteMeIn2020 hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]# ./glacier.py -h
- kicked-off a sync; I'll check on this tomorrow
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 vault sync deleteMeIn2020 glacier: queued inventory job for u'deleteMeIn2020' [root@hetzner2 glacier-cli]#
- the upload script output shows a traceback dump due to an exception: boto.glacier.exceptions.UnexpectedHTTPResponseError: Expected 204, got (408, code=RequestTimeoutException, message=Request timed out.)
marcin_ose@hancock:~/backups$ ./uploadToGlacier.sh + backupDirs='hetzner1/20171101-062001 hetzner1/20171201-062001' + syncDir=/home/marcin_ose/backups/uploadToGlacier + encryptionKeyFilePath=/home/marcin_ose/backups/ose-backups-cron.key + export AWS_ACCESS_KEY_ID=CHANGEME + AWS_ACCESS_KEY_ID=CHANGEME + export AWS_SECRET_ACCESS_KEY=CHANGEME + AWS_SECRET_ACCESS_KEY=CHANGEME ++ echo hetzner1/20171101-062001 hetzner1/20171201-062001 + for dir in '$(echo $backupDirs)' ++ echo hetzner1/20171101-062001 ++ tr / _ + archiveName=hetzner1_20171101-062001 ++ date -u --rfc-3339=seconds + timestamp='2018-04-02 18:15:42+00:00' + fileListFilePath=/home/marcin_ose/backups/uploadToGlacier/hetzner1_20171101-062001.fileList.txt + archiveFilePath=/home/marcin_ose/backups/uploadToGlacier/hetzner1_20171101-062001.tar + echo ================================================================================ + echo 'This file is metadata for the archive '\hetzner1_20171101-062001'\. In it, we list all the files included in the compressed/encrypted archive (produced using '\ls -lahR hetzner1/20171101-062001'\), including the files within the tarba lls within the archive (produced using '\find hetzner1/20171101-062001 -type f -exec tar -tvf '\{}'\ \; '\)' + echo '' + echo ' - Michael Altfield <maltfield@opensourceecology.org>' + echo '' + echo ' Note: this file was generated at 2018-04-02 18:15:42+00:00' + echo ================================================================================ + echo '#############################' + echo '# '\ls -lahR'\ output follows #' + echo '#############################' + ls -lahR hetzner1/20171101-062001 + echo ================================================================================ + echo '############################' + echo '# tarball contents follows #' + echo '############################' + find hetzner1/20171201-062001 -type f -exec tar -tvf '{}' ';' + echo ================================================================================ + 
bzip2 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.fileList.txt + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.fileList.txt.bz2 Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.fileList.txt rm: cannot remove ‘/home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.fileList.txt’: No such file or directory + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.fileList.txt.bz2.gpg + tar -cvf /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar hetzner1/20171201-062001/ hetzner1/20171201-062001/ hetzner1/20171201-062001/public_html/ hetzner1/20171201-062001/public_html/public_html.20171201-062001.tar.bz2 + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar.gpg Traceback (most recent call last): File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 736, in <module> main() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 732, in main App().main() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 718, in main self.args.func() File "/home/marcin_ose/sandbox/glacier-cli/glacier.py", line 500, in archive_upload file_obj=self.args.file, description=name) File 
"/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/vault.py", line 178, in create_archive_from_file writer.write(data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 219, in write self.partitioner.write(data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 61, in write self._send_part() File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 75, in _send_part self.send_fn(part) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 222, in _upload_part self.uploader.upload_part(self.next_part_index, part_data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/writer.py", line 129, in upload_part content_range, part_data) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/layer1.py", line 637, in upload_part response_headers=response_headers) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/layer1.py", line 84, in make_request raise UnexpectedHTTPResponseError(ok_responses, response) boto.glacier.exceptions.UnexpectedHTTPResponseError: Expected 204, got (408, code=RequestTimeoutException, message=Request timed out.) + rm -rf /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.fileList.txt.bz2 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.fileList.txt.bz2.gpg /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171201-062001.tar.gpg marcin_ose@hancock:~/backups$
- so it looks like all uploads were successful except the last actual data archive, 'hetzner1_20171201-062001.tar.gpg'
- that failure rate is pretty awful if we were to loop it. I could add some retry logic, but the dollar cost of accidentally uploading an archive twice is pretty high :\ for now I'll just run the script again for this one archive
- I also updated the script to only delete a file if $? was 0 after the upload attempt. That way, if an upload fails, I can manually trigger it again after reviewing the script output and the contents of the directory.
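The guard is roughly this (a self-contained sketch, not the script itself; 'upload_cmd' is a stand-in for the glacier.py call, here simulating a timeout):

```shell
# stand-in for the glacier.py upload; 'false' simulates a timed-out upload
upload_cmd() { false; }

f="$(mktemp)"
if upload_cmd "$f"; then
  rm -- "$f"                 # upload succeeded: safe to delete the local copy
else
  echo "upload of $f failed; keeping it for a manual retry"
fi
[ -e "$f" ] && echo "file preserved"
```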
Mon Apr 02, 2018
- dreamhost got back to me on the extension. It wasn't Elizabeth, but Jin. Jin said we could have until Apr 9th--3 more days than before, 7 days from today. Hopefully I can get all the data onto glacier by then.
- the upload of the 'hetzner1_20171001-052001' backup data kicked off last night completed
marcin_ose@hancock:~/backups$ ./uploadToGlacier.sh + backupDirs=hetzner1/20171001-052001 + syncDir=/home/marcin_ose/backups/uploadToGlacier + encryptionKeyFilePath=/home/marcin_ose/backups/ose-backups-cron.key + export AWS_ACCESS_KEY_ID=CHANGEME + AWS_ACCESS_KEY_ID=CHANGEME + export AWS_SECRET_ACCESS_KEY=CHANGEME + AWS_SECRET_ACCESS_KEY=CHANGEME ++ echo hetzner1/20171001-052001 + for dir in '$(echo $backupDirs)' ++ echo hetzner1/20171001-052001 ++ tr / _ + archiveName=hetzner1_20171001-052001 ++ date -u --rfc-3339=seconds + timestamp='2018-04-01 19:44:13+00:00' + fileListFilePath=/home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.fileList.txt + archiveFilePath=/home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.tar + echo ================================================================================ + echo 'This file is metadata for the archive '\hetzner1_20171001-052001'\. In it, we list all the files included in the compressed/encrypted archive (produced using '\ls -lahR hetzner1/20171001-052001'\), including the files within the tarballs within the archive (produced using '\find hetzner1/20171001-052001 -type f -exec tar -tvf '\{}'\ \; '\)' + echo '' + echo ' - Michael Altfield <maltfield@opensourceecology.org>' + echo '' + echo ' Note: this file was generated at 2018-04-01 19:44:13+00:00' + echo ================================================================================ + echo '#############################' + echo '# '\ls -lahR'\ output follows #' + echo '#############################' + ls -lahR hetzner1/20171001-052001 + echo ================================================================================ + echo '############################' + echo '# tarball contents follows #' + echo '############################' + find hetzner1/20171001-052001 -type f -exec tar -tvf '{}' ';' + echo ================================================================================ + bzip2 
/home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.fileList.txt + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.fileList.txt.bz2 Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.fileList.txt rm: cannot remove ‘/home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.fileList.txt’: No such file or directory + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.fileList.txt.bz2.gpg + tar -cvf /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.tar hetzner1/20171001-052001/ hetzner1/20171001-052001/ hetzner1/20171001-052001/public_html/ hetzner1/20171001-052001/public_html/public_html.20171001-052001.tar.bz2 + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.tar Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.tar + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload deleteMeIn2020 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20171001-052001.tar.gpg marcin_ose@hancock:~/backups$
- the inventory still doesn't show the above backups, so I issued a sync
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020 hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]# [root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 vault sync deleteMeIn2020 glacier: queued inventory job for u'deleteMeIn2020' [root@hetzner2 glacier-cli]# [root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020 hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing 
the file and dir list contents of the archive of the same prefix name hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]#
- the restore of the 'hetzner1_20170901-052001' backup data from Glacier that I kicked off yesterday appears to have finished
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive retrieve --wait deleteMeIn2020 'hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name' 'hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates' [root@hetzner2 glacier-cli]# timed out waiting for input: auto-logout [maltfield@hetzner2 ~]$
- yep, the files got dropped into the cwd with the archive's description as the filename. This should be saner going forward now that I've updated the uploadToGlacier.sh script to *not* specify a '--name', which means glacier-cli will set the archive description to the filename
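So a future upload is simply this, per the basak usage text above (filename shown is one of our real archives; command not re-tested here):

```shell
# with --name omitted, the archive description defaults to the filename
./glacier.py --region us-west-2 archive upload deleteMeIn2020 hetzner1_20171201-062001.tar.gpg
```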
- before I do anything, I'm going to make copies of these files in case my gpg/tar manipulations delete the originals; I don't want to pay to download them from Glacier again!
[root@hetzner2 glacier-cli]# mkdir ../orig [root@hetzner2 glacier-cli]# cp hetzner1_20170901-052001.fileList.txt.bz2.gpg\:\ this\ is\ a\ metadata\ file\ showing\ the\ file\ and\ dir\ list\ contents\ of\ the\ archive\ of\ the\ same\ prefix\ name ../orig/ [root@hetzner2 glacier-cli]# cp hetzner1_20170901-052001.tar.gpg\:\ this\ is\ an\ encrypted\ tarball\ of\ a\ backup\ from\ our\ ose\ server\ taken\ at\ the\ time\ that\ the\ archive\ description\ prefix\ indicates ../orig [root@hetzner2 glacier-cli]#
- I renamed the other files to just the filename, removing the description starting with the colon (:), and decrypted them
[root@hetzner2 glacierRestore]# gpg --batch --passphrase-file /root/backups/ose-backups-cron.key --output hetzner1_20170901-052001.fileList.txt.bz2 --decrypt hetzner1_20170901-052001.fileList.txt.bz2.gpg gpg: AES encrypted data gpg: encrypted with 1 passphrase [root@hetzner2 glacierRestore]# ls glacier-cli hetzner1_20170901-052001.fileList.txt.bz2.gpg orig hetzner1_20170901-052001.fileList.txt.bz2 hetzner1_20170901-052001.hetzner1_20170901-052001.tar.gpg [root@hetzner2 glacierRestore]# [root@hetzner2 glacierRestore]# ls glacier-cli hetzner1_20170901-052001.fileList.txt.bz2.gpg hetzner1_20170901-052001.tar.gpg hetzner1_20170901-052001.fileList.txt.bz2 hetzner1_20170901-052001.tar orig [root@hetzner2 glacierRestore]#
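The rename step can be done in one loop (a sketch run in a throwaway temp dir for illustration; our real files follow the same 'name: description' pattern):

```shell
# demo in a throwaway dir; the real run happens in /root/backups/glacierRestore
cd "$(mktemp -d)"
touch 'hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball'
for f in *': '*; do
  mv -- "$f" "${f%%: *}"     # keep only the part before ': '
done
ls    # → hetzner1_20170901-052001.tar.gpg
```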
- extracted the decrypted archive that was downloaded from glacier
[root@hetzner2 glacierRestore]# tar -xf hetzner1_20170901-052001.tar [root@hetzner2 glacierRestore]# [root@hetzner2 glacierRestore]# ls glacier-cli hetzner1_20170901-052001.fileList.txt.bz2 hetzner1_20170901-052001.tar orig hetzner1 hetzner1_20170901-052001.fileList.txt.bz2.gpg hetzner1_20170901-052001.tar.gpg [root@hetzner2 glacierRestore]#
- extracted the compressed public_html contents from inside the outer, uncompressed tarball
[root@hetzner2 public_html]# tar -xjf public_html.20170901-052001.tar.bz2 [root@hetzner2 public_html]# [root@hetzner2 public_html]# du -sh * 12G public_html.20170901-052001.tar.bz2 20G usr [root@hetzner2 public_html]#
- confirmed that I could read the contents of one of the files after the archive was downloaded from glacier, decrypted, and extracted. This completes our end-to-end test of a restore from glacier
[root@hetzner2 public_html]# head usr/www/users/osemain/w/README ##### MediaWiki MediaWiki is a popular and free, open-source wiki software package written in PHP. It serves as the platform for Wikipedia and the other projects of the Wikimedia Foundation, which deliver content in over 280 languages to more than half a billion people each month. MediaWiki's reliability and robust feature set have earned it a large and vibrant community of third-party users and developers. MediaWiki is: [root@hetzner2 public_html]#
Sun Apr 01, 2018
- the reinventory of vault archive metadata finished last night
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 vault sync --max-age=0 --wait deleteMeIn2020 [root@hetzner2 glacier-cli]# timed out waiting for input: auto-logout [maltfield@hetzner2 ~]$
- but the contents are still stale! My best guess is that there's some delay after an archive is uploaded before it's available to be listed in an inventory
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020 id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]#
- I logged into the aws console, which shows the deleteMeIn2020 vault as having 7 archives @ a total size of 13.1 GB. This inventory was last updated at "Apr 1, 2018 5:41:24 AM". The help alt text says the timezone is my system's time. So that's much later than I initiated the inventory last night.
- I also checked the aws console billing while I was in. The month just cutover (it's April 1st--but no jokes here), so I can see the entire bill for March came to a total of $0.15.
- unfortunately, the majority of the fee was in requests: 2,872 requests for a total of $0.14. The storage fee itself was just $0.01. Therefore, I should probably look into how to configure boto to increase the part size for 'glacier-cli' uploads. Even so, the total would be ~20x that for all our backup dumps to glacier = 0.14*20 = $2.80. Ok, that's cheap enough. Even $5 would be fine.
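Quick sanity check on the request count (my own arithmetic, assuming ~13 GiB uploaded and boto's multipart part size defaulting to 4 MiB; the billed 2,872 requests are in the same ballpark, and a larger part size cuts the count proportionally):

```shell
# rough upload-part request counts at various part sizes (assumed 13 GiB total)
bytes=$((13 * 1024 * 1024 * 1024))
for part_mib in 4 32 128; do
  parts=$(( bytes / (part_mib * 1024 * 1024) ))
  echo "part size ${part_mib} MiB -> ~${parts} upload-part requests"
done
# → 3328, 416, and 104 requests respectively
```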
- I tried a refresh again; same result
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020 id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name [root@hetzner2 glacier-cli]#
- playing around with the glacier-cli command, I came across this job listing
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 job list i/d 2018-04-01T03:07:35.470Z deleteMeIn2020 i/d 2018-03-31T19:47:45.511Z deleteMeIn2020 [root@hetzner2 glacier-cli]#
- I kicked off another max-age=0 sync while I poke around (might as well--it takes 4 hours!)
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 vault sync --max-age=0 --wait deleteMeIn2020
- then I checked the job list again, and it grew!
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 job list i/p 2018-04-01T13:43:13.901Z deleteMeIn2020 i/d 2018-04-01T03:07:35.470Z deleteMeIn2020 i/d 2018-03-31T19:47:45.511Z deleteMeIn2020 [root@hetzner2 glacier-cli]#
- the top one is the one I just initiated. My best guess is that the 'i' means "Inventory", the 'p' means "inProgress" and the 'd' means "Done"
- hopped over to the aws-cli command to investigate this further
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier list-jobs --account-id - --vault-name deleteMeIn2020 { "JobList": [ { "InventoryRetrievalParameters": { "Format": "JSON" }, "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", "Completed": false, "JobId": "7Q31uhwAGJ6TmzC-s4nif9ldkoiZYrMh6V1xHE9QoaMGvGcf0qSp7xo76LtrwsVDE5-CIeW1a3UzwnwCiOcZSngCGV1V", "Action": "InventoryRetrieval", "CreationDate": "2018-04-01T13:43:13.901Z", "StatusCode": "InProgress" }, { "CompletionDate": "2018-04-01T03:07:35.470Z", "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", "InventoryRetrievalParameters": { "Format": "JSON" }, "Completed": true, "InventorySizeInBytes": 2232, "JobId": "gHpKcv0KXVmfoMOa_TrqeVzLFAzZzpCdwsJl-9FeEbQHQFr6LEwzspwE6nZqrEi1HgmeDixjtWbw1JciInf5QxHc9dFe", "Action": "InventoryRetrieval", "CreationDate": "2018-03-31T23:21:19.873Z", "StatusMessage": "Succeeded", "StatusCode": "Succeeded" }, { "CompletionDate": "2018-03-31T19:47:45.511Z", "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", "InventoryRetrievalParameters": { "Format": "JSON" }, "Completed": true, "InventorySizeInBytes": 2232, "JobId": "5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0", "Action": "InventoryRetrieval", "CreationDate": "2018-03-31T16:01:15.869Z", "StatusMessage": "Succeeded", "StatusCode": "Succeeded" } ] } hancock%
- so that confirms that all 3x jobs are "InventoryRetrieval" jobs. 2x are "Succeeded" and 1x (the one I just initiated) is "InProgress". The finished ones both report a size of 2232 bytes. That's ~2K, which is probably the size of the inventory report itself (ie: the metadata)--not the size of the vault's archives.
- got the output of the oldest inventory job
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id 5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0 output.json { "status": 200, "acceptRanges": "bytes", "contentType": "application/json" } hancock% cat output.json {"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020","InventoryDate":"2018-03-31T15:25:52Z","ArchiveList":[{"ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:35:48Z","Size":380236,"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"},{"ArchiveId":"lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:50:36Z","Size":280709,"SHA256TreeHash":"3f79016e6157ff3e1c9c853337b7a3e7359a9183ae9b26f1d03c1d1c594e45ab"},{"ArchiveId":"fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:53:00Z","Size":280718,"SHA256TreeHash":"6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4"},{"ArchiveId":"zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive 
name indicates","CreationDate":"2018-03-31T02:55:04Z","Size":1187682789,"SHA256TreeHash":"c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f"},{"ArchiveId":"Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:57:50Z","Size":877058000,"SHA256TreeHash":"fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b"}]}% hancock%
- got the output of the next newest inventory, which I guess is the one I generated last night
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id 'gHpKcv0KXVmfoMOa_TrqeVzLFAzZzpCdwsJl-9FeEbQHQFr6LEwzspwE6nZqrEi1HgmeDixjtWbw1JciInf5QxHc9dFe' output.json { "status": 200, "acceptRanges": "bytes", "contentType": "application/json" } hancock% cat output.json {"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020","InventoryDate":"2018-03-31T15:25:52Z","ArchiveList":[{"ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:35:48Z","Size":380236,"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"},{"ArchiveId":"lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:50:36Z","Size":280709,"SHA256TreeHash":"3f79016e6157ff3e1c9c853337b7a3e7359a9183ae9b26f1d03c1d1c594e45ab"},{"ArchiveId":"fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:53:00Z","Size":280718,"SHA256TreeHash":"6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4"},{"ArchiveId":"zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive 
name indicates","CreationDate":"2018-03-31T02:55:04Z","Size":1187682789,"SHA256TreeHash":"c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f"},{"ArchiveId":"Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:57:50Z","Size":877058000,"SHA256TreeHash":"fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b"}]}% hancock%
- so both of the most recently completed inventory results show only the stale set of 5 archives in the deleteMeIn2020 vault. We want the metadata for the 'hetzner1/20170901-052001' backup so we can verify that restoring a >4G archive works before proceeding with dumping the rest of the backups into glacier.
- I sent an email to Elizabeth at Dreamhost telling her that we already reduced our usage by ~250G, and I asked for an additional 2 weeks so we could validate our Glacier POC and upload the remaining data before they delete it.
...
- I biked 40km, ate lunch, and checked again; the inventory was complete & now listed the backups that I uploaded last night = 'hetzner1_20170901-052001'
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 vault sync --max-age=0 --wait deleteMeIn2020
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive list deleteMeIn2020
hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name
hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates
id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates
id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates
id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name
id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name
id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name
[root@hetzner2 glacier-cli]#
- as nice as glacier-cli is for uploading, its list output tries to simplify things (treating archives as files), so it excludes the actual archive id, and it lacks the hash, size, & creation timestamp. To get these, I pulled the json job output using the aws-cli
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id '7Q31uhwAGJ6TmzC-s4nif9ldkoiZYrMh6V1xHE9QoaMGvGcf0qSp7xo76LtrwsVDE5-CIeW1a3UzwnwCiOcZSngCGV1V' output.json { "status": 200, "acceptRanges": "bytes", "contentType": "application/json" } hancock% cat output.json {"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020","InventoryDate":"2018-04-01T09:41:24Z","ArchiveList":[{"ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:35:48Z","Size":380236,"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"},{"ArchiveId":"lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:50:36Z","Size":280709,"SHA256TreeHash":"3f79016e6157ff3e1c9c853337b7a3e7359a9183ae9b26f1d03c1d1c594e45ab"},{"ArchiveId":"fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:53:00Z","Size":280718,"SHA256TreeHash":"6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4"},{"ArchiveId":"zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive 
name indicates","CreationDate":"2018-03-31T02:55:04Z","Size":1187682789,"SHA256TreeHash":"c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f"},{"ArchiveId":"Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:57:50Z","Size":877058000,"SHA256TreeHash":"fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b"},{"ArchiveId":"P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA","ArchiveDescription":"hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name","CreationDate":"2018-03-31T22:46:18Z","Size":2299038,"SHA256TreeHash":"2e789c8c99f08d338f8c1c2440afd76c23f76124c3dbdd33cbfa9f46f5c6b2aa"},{"ArchiveId":"o-naX0m4kQde-2i-8JZbEESi7r8OlFjIoDjgbQSXT_zt9L_e7qOH3HQ1R7ViQC3i7M0lVLbODsGZm9w9HfI3tHYKb2R1T_WWBwMxFuC_OhYiPX8uepTvvBg2Mg6KysP9H3zNzwGSZw","ArchiveDescription":"hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates","CreationDate":"2018-03-31T23:47:51Z","Size":12009829896,"SHA256TreeHash":"022f088abcfadefe7df5ac770f45f315ddee708f2470133ebd027ce988e1a45d"}]}% hancock%
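- for the record, those extra fields can be pulled out of the inventory json programmatically rather than by eyeballing it; a minimal sketch using python's stdlib from the shell (the inline sample below is a stand-in for the real output.json, and its ids are placeholders):

```shell
# miniature stand-in for output.json (same shape as the real inventory)
cat > /tmp/inventory.json <<'EOF'
{"VaultARN":"arn:aws:glacier:us-west-2:000000000000:vaults/deleteMeIn2020","InventoryDate":"2018-04-01T09:41:24Z","ArchiveList":[{"ArchiveId":"EXAMPLE-ID-1","ArchiveDescription":"hetzner1_20170901-052001.tar.gpg: encrypted tarball","CreationDate":"2018-03-31T23:47:51Z","Size":12009829896,"SHA256TreeHash":"022f088a"}]}
EOF

# print size, creation date, id, & description for every archive
python3 - <<'EOF'
import json
with open('/tmp/inventory.json') as f:
    inv = json.load(f)
for a in inv['ArchiveList']:
    print(a['Size'], a['CreationDate'], a['ArchiveId'],
          a['ArchiveDescription'], sep='\t')
EOF
```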
- so I want to restore these 2x archives:
- P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA
- o-naX0m4kQde-2i-8JZbEESi7r8OlFjIoDjgbQSXT_zt9L_e7qOH3HQ1R7ViQC3i7M0lVLbODsGZm9w9HfI3tHYKb2R1T_WWBwMxFuC_OhYiPX8uepTvvBg2Mg6KysP9H3zNzwGSZw
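- since glacier-cli won't take an archive id, the retrieval could instead be initiated directly with the aws-cli's initiate-job call (Type 'archive-retrieval'); a sketch, echoed as a dry run so nothing gets submitted by accident:

```shell
# one of the two archive ids from the inventory above
archiveId='P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA'

# job-parameters payload for an archive retrieval
jobParams="{\"Type\": \"archive-retrieval\", \"ArchiveId\": \"${archiveId}\"}"

# drop the leading 'echo' to actually submit the retrieval job
echo aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 \
  --job-parameters "$jobParams"
```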
- unfortunately, it appears that the 'glacier-cli' tool also doesn't let you specify the archive id when restoring. I think I'm going to have to just make the archive description a filename at upload time to play nice with glacier-cli. In the meantime, I'll use my insane "name", which is actually a human-readable description of the archive (as amazon intended for this field)
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive retrieve --wait deleteMeIn2020 'P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA' 'o-naX0m4kQde-2i-8JZbEESi7r8OlFjIoDjgbQSXT_zt9L_e7qOH3HQ1R7ViQC3i7M0lVLbODsGZm9w9HfI3tHYKb2R1T_WWBwMxFuC_OhYiPX8uepTvvBg2Mg6KysP9H3zNzwGSZw' glacier: archive 'P9wIGNBbLaAoz7xGht6Y4k7j33nGgPmg0RQ4sesN2tImQLjFN1dtkooVGrBnQqbPt8YhgvwUXv8eO_N72KRjS3RrZQYvkGxAQ9uPcJ-zaDOG8kII7l4p7UzGfaroO63ZreHItIW4GA' not found [root@hetzner2 glacier-cli]#
- next I tried just the filename prefix as the name; that failed too
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive retrieve --wait deleteMeIn2020 'hetzner1_20170901-052001.fileList.txt.bz2.gpg' 'hetzner1_20170901-052001.tar.gpg' glacier: archive 'hetzner1_20170901-052001.fileList.txt.bz2.gpg' not found [root@hetzner2 glacier-cli]#
- ok, using the whole description as the name
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 archive retrieve --wait deleteMeIn2020 'hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name' 'hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup from our ose server taken at the time that the archive description prefix indicates'
- that command appears to be waiting; I'll check on that tomorrow
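- while waiting, the job's status can also be checked out-of-band with the aws-cli instead of blocking in glacier-cli's --wait; a sketch (the job id argument in the example is a hypothetical placeholder, and 'aws' must be configured with credentials):

```shell
# returns "True" once the retrieval job's output is ready to download
check_job() {
  aws glacier describe-job --account-id - --vault-name "$1" \
    --job-id "$2" --query 'Completed' --output text
}

# example invocation (not run here, since it needs live AWS credentials):
#   check_job deleteMeIn2020 'EXAMPLE-JOB-ID'
```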
- in the meantime, on the dreamhost server, I'll fix the script to just use the filename as the description & upload the next archive using this setting
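- the fix in the script can be as small as deriving the archive description from the tarball's filename; a sketch with hypothetical paths & variable names (the real uploadToGlacier.sh internals may differ):

```shell
# hypothetical example path; the real script iterates over $backupDirs
archiveFile='/home/backup/uploadToGlacier/hetzner1_20171001-052001.tar.gpg'

# use the bare filename as the archive description so glacier-cli can
# address the archive by name later
description="$(basename "$archiveFile")"
echo "$description"
```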
- note that it's a new month, so dreamhost has our first-of-the-month backups for 2018-04-01. Therefore, here's the updated list I'll be sending to this deleteMeIn2020 vault (total 276G)
 20G hetzner2/20170702-052001
1.7G hetzner2/20170801-072001
1.7G hetzner2/20170901-072001
2.5G hetzner2/20171001-072001
838M hetzner2/20171101-072001
997M hetzner2/20171202-072001
1.1G hetzner2/20180102-072001
 14G hetzner2/20180202-072001
 25G hetzner2/20180302-072001
2.8G hetzner2/20180401-072001
 39G hetzner1/20170701-052001
 39G hetzner1/20170801-052001
 12G hetzner1/20170901-052001
 12G hetzner1/20171001-052001
 12G hetzner1/20171101-062001
 12G hetzner1/20171201-062001
 12G hetzner1/20180101-062001
 27G hetzner1/20180201-062001
 28G hetzner1/20180301-062002
 12G hetzner1/20180401-052001
- the last backup uploaded was 'hetzner1/20170901-052001', so for this test I'll use the following month's backup, which is the same size (12G, our smallest size that exceeds the 4G limit) = 'hetzner1/20171001-052001'
- I added a line to delete the contents of the 'uploadToGlacier/' directory after the upload to glacier
- I updated the uploadToGlacier.sh script to attempt to upload the next 2x backups by setting 'backupDirs="hetzner1/20171101-062001 hetzner1/20171201-062001"'
- I kicked off the uploadToGlacier.sh script. If the next sync in ~48 hours shows all 3x backups using the file name as the description (per glacier-cli's desire), then I think I can execute this script for the remaining backups.
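- if that pans out, a hypothetical driver for the remaining backups could walk the list one directory at a time, so a single failed upload doesn't block the whole batch (a sketch; the echo stands in for the real uploadToGlacier.sh invocation, and the list mirrors part of the table above):

```shell
# remaining hetzner1 backups still to be sent to the deleteMeIn2020 vault
backups="hetzner1/20180101-062001 hetzner1/20180201-062001 hetzner1/20180301-062002 hetzner1/20180401-052001"

for b in $backups; do
  # replace echo with the real upload invocation when ready
  echo "would upload: $b"
done
```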