Maltfield Log/2018 Q1
My work log from the year 2018 Quarter 1. I intentionally made this verbose to make future admins' work easier when troubleshooting. The more keywords, error messages, etc. that are listed in this log, the more helpful it will be for the future OSE Sysadmin.
Contents
- 1 See Also
- 2 Sat Mar 31, 2018
- 3 Fri Mar 30, 2018
- 4 Thr Mar 29, 2018
- 5 Mon Mar 26, 2018
- 6 Fri Mar 23, 2018
- 7 Wed Mar 21, 2018
- 8 Tue Mar 20, 2018
- 9 Fri Mar 16, 2018
- 10 Thr Mar 15, 2018
- 11 Mon Mar 12, 2018
- 12 Tue Mar 06, 2018
- 13 Mon Mar 05, 2018
- 14 Sun Mar 04, 2018
- 15 Sat Mar 03, 2018
- 16 Fri Mar 02, 2018
- 17 Thr Mar 01, 2018
- 18 Tue Feb 27, 2018
- 19 Sun Feb 25, 2018
- 20 Sat Feb 24, 2018
- 21 Tue Feb 20, 2018
- 22 Mon Feb 19, 2018
- 23 Sun Feb 18, 2018
- 24 Thr Feb 15, 2018
- 25 Wed Feb 14, 2018
- 26 Tue Feb 13, 2018
- 27 Mon Feb 12, 2018
- 28 Thr Feb 09, 2018
- 29 Thr Feb 08, 2018
- 30 Wed Feb 07, 2018
- 31 Tue Feb 06, 2018
- 32 Mon Feb 05, 2018
- 33 Sun Feb 04, 2018
- 34 Thr Feb 01, 2018
- 35 Wed Jan 31, 2018
- 36 Thr Jan 25, 2018
- 37 Tue Jan 23, 2018
- 38 Mon Jan 22, 2018
- 39 Sat Jan 20, 2018
- 40 Thr Jan 18, 2018
- 41 Sat Jan 13, 2018
- 42 Fri Jan 12, 2018
- 43 Mon Jan 08, 2018
- 44 Fri Jan 05, 2018
- 45 Thr Jan 04, 2018
- 46 Wed Jan 03, 2018
- 47 Tue Jan 02, 2018
See Also
Sat Mar 31, 2018
- the run from last night finished; here's the output:
hancock% ./uploadToGlacier.sh tar: This does not look like a tar archive tar: Skipping to next header tar: Exiting with failure status due to previous errors Reading passphrase from file descriptor 3 rm: cannot remove ‘/home/marcin_ose/backups/uploadToGlacier/hetzner2_20171101-072001.fileList.txt’: No such file or directory Error parsing parameter '--body': Blob values must be a path to a file. { "archiveId": "fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw", "checksum": "6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4", "location": "/099400651767/vaults/deleteMeIn2020/archives/fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCp YkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw" } hetzner2/20171101-072001/ hetzner2/20171101-072001/etc/ hetzner2/20171101-072001/etc/etc.20171101-072001.tar.gz hetzner2/20171101-072001/home/ hetzner2/20171101-072001/home/home.20171101-072001.tar.gz hetzner2/20171101-072001/log/ hetzner2/20171101-072001/log/log.20171101-072001.tar.gz hetzner2/20171101-072001/mysqldump/ hetzner2/20171101-072001/mysqldump/mysqldump.20171101-072001.sql.gz hetzner2/20171101-072001/root/ hetzner2/20171101-072001/root/root.20171101-072001.tar.gz hetzner2/20171101-072001/www/ hetzner2/20171101-072001/www/www.20171101-072001.tar.gz Reading passphrase from file descriptor 3 Reading passphrase from file descriptor 3 { "archiveId": "zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA", "checksum": "c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f", "location": "/099400651767/vaults/deleteMeIn2020/archives/zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA" } { "archiveId": "Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw", "checksum": "fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b", "location": "/099400651767/vaults/deleteMeIn2020/archives/Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw" } hancock%
- logged into the aws console to check the status of the Glacier uploads I did last night. It now shows 1 archive, though there should be a few. It shows the "Inventory Last Updated" at "Mar 31, 2018 5:22:27 AM"
- I checked our billing page in the aws console, and it now shows $0.01 due to 5 requests to Glacier in usw2. That fee is $0.050 per 1,000 requests, so 5 requests = $0.00025, rounded up to the cent.
- did some research on the glacier docs, and found that archives apparently don't have names; they're given a 138-byte uid by glacier. Therefore, I should include the name of the backup (which includes the very important timestamp that the backup was taken & the server it was taken on) in the description. I also learned that I can't force an inventory to be taken of a vault, and that it's done "approximately daily" https://docs.aws.amazon.com/amazonglacier/latest/dev/working-with-archives.html#client-side-key-map-concept
Except for the optional archive description, Amazon Glacier does not support any additional metadata for the archives. When you upload an archive Amazon Glacier assigns an ID, an opaque sequence of characters, from which you cannot infer any meaning about the archive. You might maintain metadata about the archives on the client-side. The metadata can include archive name and some other meaningful information about the archive.
Note: If you are an Amazon Simple Storage Service (Amazon S3) customer, you know that when you upload an object to a bucket, you can assign the object an object key such as MyDocument.txt or SomePhoto.jpg. In Amazon Glacier, you cannot assign a key name to the archives you upload.
If you maintain client-side archive metadata, note that Amazon Glacier maintains a vault inventory that includes archive IDs and any descriptions you provided during the archive upload. You might occasionally download the vault inventory to reconcile any issues in your client-side database you maintain for the archive metadata. However, Amazon Glacier takes vault inventory approximately daily. When you request a vault inventory, Amazon Glacier returns the last inventory it prepared, a point in time snapshot.
- there is no way to list all archives. Archive operations are: upload, download, and delete. No list!
- there is a 'describe-vault' subcommand, which apparently gives more up-to-date results than the aws console. Here it shows we have 5 archives, whereas the console still shows 1 archive.
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier describe-vault --account-id - --vault-name deleteMeIn2020 { "SizeInBytes": 2065846292, "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", "LastInventoryDate": "2018-03-31T15:25:52.860Z", "NumberOfArchives": 5, "CreationDate": "2018-03-29T21:36:06.041Z", "VaultName": "deleteMeIn2020" } hancock%
- I clicked the 'refresh' circle button in the aws console in the glacier Service, and it updated to list 5 archives & a size of 1.9 GB for the deleteMeIn2020 vault. It does not appear to show the archive IDs & descriptions, only the number & total size.
- for future reference, here are the archive IDs that I have documented uploading
- qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA
- ??
- fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw
- zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA
- Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw
- I also read that the single-operation upload method I used yesterday cannot be used for files >4G. Instead, you have to use "multipart upload" for archives >4G (which covers most of our backups). https://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
Options for Uploading an Archive to Amazon Glacier
Depending on the size of the data you are uploading, Amazon Glacier offers the following options:
Upload archives in a single operation In a single operation, you can upload archives from 1 byte to up to 4 GB in size. However, we encourage Amazon Glacier customers to use multipart upload to upload archives greater than 100 MB. For more information, see Uploading an Archive in a Single Operation.
Upload archives in parts Using the multipart upload API, you can upload large archives, up to about 40,000 GB (10,000 * 4 GB).
The multipart upload API call is designed to improve the upload experience for larger archives. You can upload archives in parts. These parts can be uploaded independently, in any order, and in parallel. If a part upload fails, you only need to upload that part again and not the entire archive. You can use multipart upload for archives from 1 byte to about 40,000 GB in size. For more information, see Uploading Large Archives in Parts (Multipart Upload).
- found more info on multipart uploads here, but I'll need to test this https://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-archive-mpu.html
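- based on those docs, I sketched what I think the multipart flow will look like for the 12G 'hetzner1/20170901-052001' archive (untested; the 1 GiB part size is my choice, and the uploadId/treeHash values are placeholders I'd have to wire up; note that glacier wants the part size to be a power-of-2 number of MiB):
# 1. start a multipart upload; it returns an "uploadId" to use for the parts
aws glacier initiate-multipart-upload --account-id - --vault-name deleteMeIn2020 --part-size 1073741824 --archive-description 'hetzner1_20170901-052001: ...'
uploadId='CHANGEME'
# 2. split the encrypted tarball into 1 GiB parts (chunkaa, chunkab, ...)
split --bytes=1073741824 hetzner1_20170901-052001.tar.gz.gpg chunk
# 3. upload each part, passing its byte range within the whole archive
aws glacier upload-multipart-part --account-id - --vault-name deleteMeIn2020 --upload-id "${uploadId}" --range 'bytes 0-1073741823/*' --body chunkaa
aws glacier upload-multipart-part --account-id - --vault-name deleteMeIn2020 --upload-id "${uploadId}" --range 'bytes 1073741824-2147483647/*' --body chunkab
# ...one call per part...
# 4. stitch the parts together server-side; this needs the total archive size
#    in bytes & the SHA-256 tree hash of the whole file
treeHash='CHANGEME'
aws glacier complete-multipart-upload --account-id - --vault-name deleteMeIn2020 --upload-id "${uploadId}" --archive-size 11989358711 --checksum "${treeHash}"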
- apparently the way to view the contents of a vault is to use the initiate-job subcommand with inventory-retrieval type https://docs.aws.amazon.com/cli/latest/reference/glacier/initiate-job.html
- I initiated an inventory-retrieval job for the deleteMeIn2020 vault
hancock% aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}' zsh: command not found: aws hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier initiate-job --account-id - --vault-name deleteMeIn2020 --job-parameters '{"Type": "inventory-retrieval"}' { "location": "/099400651767/vaults/deleteMeIn2020/jobs/5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0", "jobId": "5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0" } hancock%
- and a list-jobs shows the job is InProgress
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier list-jobs --account-id - --vault-name deleteMeIn2020 { "JobList": [ { "InventoryRetrievalParameters": { "Format": "JSON" }, "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", "Completed": false, "JobId": "5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0", "Action": "InventoryRetrieval", "CreationDate": "2018-03-31T16:01:15.869Z", "StatusCode": "InProgress" } ] } hancock%
- updated the uploadToGlacier.sh script to include the $archiveName in the archive-description, since there is no archive name--it's just identified by a uid.
- I've read that if we use a lifecycle rule in s3 to offload to glacier, then the metadata stays in s3, making retrievals much easier
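- for reference, my understanding is that such a lifecycle rule would be set with 's3api put-bucket-lifecycle-configuration', roughly like this untested sketch (the 'ose-backups' bucket name & 'backups/' prefix are hypothetical):
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "send-backups-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [ { "Days": 0, "StorageClass": "GLACIER" } ]
    }
  ]
}
EOF
/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws s3api put-bucket-lifecycle-configuration --bucket ose-backups --lifecycle-configuration file://lifecycle.json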
- this could take up to 4 hours. Once the job completes, I think its output will only be available for 24 hours. I got the command to download it, and got an error that it's not ready for download yet
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id '5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0' output.json An error occurred (InvalidParameterValueException) when calling the GetJobOutput operation: The job is not currently available for download: 5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0 hancock%
- also tested describe-job, which showed the same output as list-jobs, though not in an array
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier describe-job --account-id - --vault-name deleteMeIn2020 --job-id '5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0' { "InventoryRetrievalParameters": { "Format": "JSON" }, "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", "Completed": false, "JobId": "5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0", "Action": "InventoryRetrieval", "CreationDate": "2018-03-31T16:01:15.869Z", "StatusCode": "InProgress" } hancock%
- there was no good way to wait on the job & automatically download the output, so I wrote a simple script. I think this should take ~4 hours
hancock% cat getGlacierJob.sh
#!/bin/bash

############
# SETTINGS #
############

jobId='5COrR-wLYeA8ZTyhlBI50Pq4Egnx5G11OmTZ2lVwpuuJTgdvwbEeC1rY1dzST0fCPRm1-D_pvHH5wyg1fJpIhgHJ4ii0'

#################
# QUERY FOR JOB #
#################

loop=true
while [ "${loop}" = "true" ]; do

	echo "INFO: attempting to get job status"
	date
	jobStatus=`/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier describe-job --account-id - --vault-name deleteMeIn2020 --job-id "${jobId}" | grep "StatusCode" | cut -d: -f2 | xargs echo -n`

	if echo "${jobStatus}" | grep -q "InProgress"; then
		# job is still InProgress; wait 30 minutes before querying again
		echo "INFO: job status was InProgress; sleeping for 30 minutes before trying again"
		sleep 1800
	else
		# job is not InProgress; exit this wait loop so we can get the output
		echo "INFO: job status was not InProgress! Exiting wait loop."
		loop=false
	fi
	echo ''

done

##################
# GET JOB OUTPUT #
##################

/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier get-job-output --account-id - --vault-name deleteMeIn2020 --job-id "${jobId}" output.json

exit 0
hancock%
- when this finally finished, the output was just a status = 200
hancock% ./getGlacierJob.sh ... INFO: attempting to get job status Sat Mar 31 13:08:23 PDT 2018 INFO: job status was not InProgress! Exiting wait loop. { "status": 200, "acceptRanges": "bytes", "contentType": "application/json" } hancock%
- and the output stored to a json file:
hancock% cat output.json {"VaultARN":"arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020","InventoryDate":"2018-03-31T15:25:52Z","ArchiveList":[{"ArchiveId":"qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:35:48Z","Size":380236,"SHA256TreeHash":"a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b"},{"ArchiveId":"lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:50:36Z","Size":280709,"SHA256TreeHash":"3f79016e6157ff3e1c9c853337b7a3e7359a9183ae9b26f1d03c1d1c594e45ab"},{"ArchiveId":"fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw","ArchiveDescription":"this is a metadata file showing the file and dir list contents of the archive of the same name","CreationDate":"2018-03-31T02:53:00Z","Size":280718,"SHA256TreeHash":"6ba4c8a93163b2d3978ae2d87f26c5ad571330ecaa9da3b6161b95074558cef4"},{"ArchiveId":"zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:55:04Z","Size":1187682789,"SHA256TreeHash":"c90c696931ed1dc7cd587dc1820ddb0567a4835bd46db76c9a326215d9950c8f"},{"ArchiveId":"Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw","ArchiveDescription":"this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates","CreationDate":"2018-03-31T02:57:50Z","Size":877058000,"SHA256TreeHash":"fdefdad19e585df8324ed25f2f52f7d98bcc368929f84dafa9a4462333af095b"}]}% hancock%
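- that blob is hard to eyeball; if 'jq' is available (I haven't checked whether hancock has it), something like this untested one-liner would tabulate the inventory:
jq -r '.ArchiveList[] | [.CreationDate, (.Size|tostring), .ArchiveDescription] | @tsv' output.json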
- while I'm waiting for the inventory, I decided to dig into how to upload archives >4G
- the smallest backup we have that's >4G is 12G. I'll test with this 12G backup: 'hetzner1/20170901-052001'
- I updated the uploadToGlacier.sh script with 'set -x' & changed backupDirs to 'hetzner1/20170901-052001'. I ran it, expecting it to fail because it would attempt to upload a file >4G. When/if it fails, I'll have the .gpg file prepared & can attempt to manually upload it
- this failed as follows
hancock% ./uploadToGlacier.sh + backupDirs=hetzner1/20170901-052001 + syncDir=/home/marcin_ose/backups/uploadToGlacier + encryptionKeyFilePath=/home/marcin_ose/backups/ose-backups-cron.key ++ echo hetzner1/20170901-052001 + for dir in '$(echo $backupDirs)' ++ echo hetzner1/20170901-052001 ++ tr / _ + archiveName=hetzner1_20170901-052001 ++ date -u --rfc-3339=seconds + timestamp='2018-03-31 19:08:48+00:00' + fileListFilePath=/home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt + archiveFilePath=/home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar.gz + echo ================================================================================ + echo 'This file is metadata for the archive '\hetzner1_20170901-052001'\. In it, we list all the files included in the compressed/encrypted archive (produced using '\ls -lahR hetzner1/20170901-052001'\), including the files within the tarballs within the archive (produced using '\find hetzner1/20170901-052001 -type f -exec tar -tvf '\{}'\ \; '\)' + echo '' + echo ' - Michael Altfield <maltfield@opensourceecology.org>' + echo '' + echo ' Note: this file was generated at 2018-03-31 19:08:48+00:00' + echo ================================================================================ + echo '#############################' + echo '# '\ls -lahR'\ output follows #' + echo '#############################' + ls -lahR hetzner1/20170901-052001 + echo ================================================================================ + echo '############################' + echo '# tarball contents follows #' + echo '############################' + find hetzner1/20170901-052001 -type f -exec tar -tvf '{}' ';' + echo ================================================================================ + bzip2 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt.bz2 Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt rm: cannot remove ‘/home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt’: No such file or directory + tar -czvf /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar.gz hetzner1/20170901-052001/ hetzner1/20170901-052001/ hetzner1/20170901-052001/public_html/ hetzner1/20170901-052001/public_html/public_html.20170901-052001.tar.bz2 + gpg --symmetric --cipher-algo aes --armor --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar.gz Reading passphrase from file descriptor 3 + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar.gz Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar.gz + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name deleteMeIn2020 --archive-description 'hetzner1_20170901-052001: this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates' --body /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar.gz.gpg An error occurred (InvalidParameterValueException) when calling the UploadArchive operation: 
Invalid Content-Length: 11989358711 hancock%
- it's also important to note the size differences between encrypting the archive using ascii-armor (16G) vs not (12G). We should definitely skip '--armor' if it doesn't break anything
hancock% while true; do date; du -sh uploadToGlacier/*; sleep 300; echo; done Sat Mar 31 13:51:20 PDT 2018 2.2M uploadToGlacier/hetzner1_20170901-052001.fileList.txt.bz2 2.2M uploadToGlacier/hetzner1_20170901-052001.fileList.txt.bz2.gpg 16G uploadToGlacier/hetzner1_20170901-052001.tar.gz.asc 12G uploadToGlacier/hetzner1_20170901-052001.tar.gz.gpg 276K uploadToGlacier/hetzner2_20171101-072001.fileList.txt.bz2 276K uploadToGlacier/hetzner2_20171101-072001.fileList.txt.bz2.gpg 1.2G uploadToGlacier/hetzner2_20171101-072001.tar.gz.asc 837M uploadToGlacier/hetzner2_20171101-072001.tar.gz.gpg hancock%
- I found 4 relevant subcommands for multipart uploads
- list-multipart-uploads
- initiate-multipart-upload
- upload-multipart-part
- complete-multipart-upload
- tested list-multipart-uploads
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier list-multipart-uploads --account-id - --vault-name deleteMeIn2020 { "UploadsList": [] } hancock%
- initiate-multipart-upload appears to just return a multipart upload resource ID, which must then be specified when uploading the parts using upload-multipart-part
- it looks like aws-cli will *not* split our file into parts; we have to do that ourselves. I found an official aws guide that suggests using the `split` command, which I confirmed is present on hancock https://docs.aws.amazon.com/cli/latest/userguide/cli-using-glacier.html
hancock% split --help Usage: split [OPTION]... [INPUT [PREFIX]] Output fixed-size pieces of INPUT to PREFIXaa, PREFIXab, ...; default size is 1000 lines, and default PREFIX is 'x'. With no INPUT, or when INPUT is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -a, --suffix-length=N generate suffixes of length N (default 2) --additional-suffix=SUFFIX append an additional SUFFIX to file names. -b, --bytes=SIZE put SIZE bytes per output file -C, --line-bytes=SIZE put at most SIZE bytes of lines per output file -d, --numeric-suffixes[=FROM] use numeric suffixes instead of alphabetic. FROM changes the start value (default 0). -e, --elide-empty-files do not generate empty output files with '-n' --filter=COMMAND write to shell COMMAND; file name is $FILE -l, --lines=NUMBER put NUMBER lines per output file -n, --number=CHUNKS generate CHUNKS output files. See below -u, --unbuffered immediately copy input to output with '-n r/...' --verbose print a diagnostic just before each output file is opened --help display this help and exit --version output version information and exit SIZE is an integer and optional unit (example: 10M is 10*1024*1024). Units are K, M, G, T, P, E, Z, Y (powers of 1024) or KB, MB, ... (powers of 1000). CHUNKS may be: N split into N files based on size of input K/N output Kth of N to stdout l/N split into N files without splitting lines l/K/N output Kth of N to stdout without splitting lines r/N like 'l' but use round robin distribution r/K/N likewise but only output Kth of N to stdout Report split bugs to bug-coreutils@gnu.org GNU coreutils home page: <http://www.gnu.org/software/coreutils/> General help using GNU software: <http://www.gnu.org/gethelp/> For complete documentation, run: info coreutils 'split invocation' hancock%
- the document also describes how to generate the treehash using openssl & cat. It's documented, but not trivial. And the iteration process is so slow that it may be better to use a 3rd-party tool that handles this; a sketch of the algorithm follows.
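- to wrap my head around it, here's a bash sketch of the tree hash as I understand it from that guide (untested; it assumes 'xxd' & 'openssl' are available on the box, and 'archive.tar.gpg' is just an example file name):
#!/bin/bash
# glacier's SHA-256 tree hash: hash each 1 MiB chunk of the file, then
# repeatedly hash the concatenation of each pair of (binary) hashes until
# a single root hash remains
file="archive.tar.gpg"
tmpDir=$(mktemp -d)
split --bytes=1M --suffix-length=5 "${file}" "${tmpDir}/chunk."

# leaf hashes, in file order (hex strings)
hashes=()
for chunk in "${tmpDir}"/chunk.*; do
	hashes+=( $(openssl dgst -sha256 "${chunk}" | awk '{print $2}') )
done

# combine pairs level-by-level
while [ ${#hashes[@]} -gt 1 ]; do
	next=()
	for (( i=0; i<${#hashes[@]}; i+=2 )); do
		if [ $((i+1)) -lt ${#hashes[@]} ]; then
			# concatenate the two child hashes as binary & hash the result
			next+=( $(echo -n "${hashes[$i]}${hashes[$((i+1))]}" | xxd -r -p | openssl dgst -sha256 | awk '{print $2}') )
		else
			# an odd hash at the end of a level is promoted unchanged
			next+=( "${hashes[$i]}" )
		fi
	done
	hashes=( "${next[@]}" )
done

echo "treehash = ${hashes[0]}"
rm -rf "${tmpDir}"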
- the best tool from previous research appears to be glacier-cli, so I'll attempt to install it on hancock
cd
mkdir sandbox
cd sandbox
git clone git://github.com/basak/glacier-cli.git
cd glacier-cli
- executing it directly fails with a missing iso8601 module
hancock% ./glacier.py Traceback (most recent call last): File "./glacier.py", line 37, in <module> import iso8601 ImportError: No module named iso8601 hancock%
- executing it with the python binary from the aws-cli install also fails, with a missing boto library
hancock% /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py Traceback (most recent call last): File "./glacier.py", line 36, in <module> import boto.glacier ImportError: No module named boto.glacier hancock%
- this was shocking; I'd expect aws-cli to use boto! I did find it, but it's called botocore
hancock% find /home/marcin_ose | grep -i boto ... /home/marcin_ose/.local/lib/aws/lib/python2.7/site-packages/botocore ... hancock%
- I went to the boto github & followed their pip install instructions, but it said it's already installed https://github.com/boto/boto/
hancock% pip install boto Requirement already satisfied (use --upgrade to upgrade): boto in /usr/lib/python2.7/dist-packages Cleaning up... hancock%
- checking the source code of glacier-cli shows that the boto import occurs before the iso8601 import (so the system python already has boto); we're probably better off using pip to get iso8601 for the system python, if possible
hancock% grep -C 2 'boto.glacier' glacier.py | head -n 4 import time import boto.glacier import iso8601 hancock%
- attempted to install iso8601 with pip, but DreamHost's process monitor killed it for "excessive resource usage" ?? (n.b. the command below also typos the package name as 'ios8601', but it was killed before that could matter)
hancock% pip install ios8601 Downloading/unpacking ios8601 Yikes! One of your processes (pip, pid 10498) was just killed for excessive resource usage. Please contact DreamHost Support for details. zsh: killed pip install ios8601 hancock%
- attempted to install using bash; same result
hancock% bash marcin_ose@hancock:~/sandbox/glacier-cli$ pip install ios8601 Downloading/unpacking ios8601 Yikes! One of your processes (pip, pid 11572) was just killed for excessive resource usage. Please contact DreamHost Support for details. Killed marcin_ose@hancock:~/sandbox/glacier-cli$
- attempted to install again, prefixing with nice. same result :(
marcin_ose@hancock:~/sandbox/glacier-cli$ nice pip install ios8601 Downloading/unpacking ios8601 Yikes! One of your processes (pip, pid 12348) was just killed for excessive resource usage. Please contact DreamHost Support for details. Killed marcin_ose@hancock:~/sandbox/glacier-cli$
- got a list of the module search path (sys.path) dirs for the built-in python
marcin_ose@hancock:~/sandbox/glacier-cli$ python Python 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> print sys.path ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/pymodules/python2.7'] >>> marcin_ose@hancock:~/sandbox/glacier-cli$
- got a list of the module search path (sys.path) dirs for the aws-cli python
marcin_ose@hancock:~/sandbox/glacier-cli$ /home/marcin_ose/.local/lib/aws/bin/python Python 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> print sys.path; ['', '/home/marcin_ose/.local/lib/aws/lib/python2.7', '/home/marcin_ose/.local/lib/aws/lib/python2.7/plat-x86_64-linux-gnu', '/home/marcin_ose/.local/lib/aws/lib/python2.7/lib-tk', '/home/marcin_ose/.local/lib/aws/lib/python2.7/lib-old', '/home/marcin_ose/.local/lib/aws/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages', '/home/marcin_ose/.local/lib/aws/lib/python2.7/site-packages'] >>> marcin_ose@hancock:~/sandbox/glacier-cli$
- found boto in /usr/lib/python2.7/dist-packages/boto/
marcin_ose@hancock:~/sandbox/glacier-cli$ find /usr/lib | grep -i boto ... /usr/lib/python2.7/dist-packages/boto ... marcin_ose@hancock:~/sandbox/glacier-cli$
- copied the directory from the system library dir into our user-specific library dir
marcin_ose@hancock:~/sandbox/glacier-cli$ cp -r /usr/lib/python2.7/dist-packages/boto /home/marcin_ose/.local/lib/aws/lib/python2.7/site-packages/ marcin_ose@hancock:~/sandbox/glacier-cli$
- better; now the local one fails on iso8601
hancock% /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py Traceback (most recent call last): File "./glacier.py", line 37, in <module> import iso8601 ImportError: No module named iso8601 hancock%
- fuck pip on this box without root. trying from source
cd
mkdir src
cd src
wget "https://pypi.python.org/packages/45/13/3db24895497345fb44c4248c08b16da34a9eb02643cea2754b21b5ed08b0/iso8601-0.1.12.tar.gz"
tar -xzvf iso8601-0.1.12.tar.gz
cd iso8601-0.1.12
- attempted to install
hancock% /home/marcin_ose/.local/lib/aws/bin/python setup.py install running install running bdist_egg running egg_info writing iso8601.egg-info/PKG-INFO writing top-level names to iso8601.egg-info/top_level.txt writing dependency_links to iso8601.egg-info/dependency_links.txt reading manifest file 'iso8601.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'iso8601.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py creating build creating build/lib.linux-x86_64-2.7 creating build/lib.linux-x86_64-2.7/iso8601 copying iso8601/__init__.py -> build/lib.linux-x86_64-2.7/iso8601 copying iso8601/iso8601.py -> build/lib.linux-x86_64-2.7/iso8601 copying iso8601/test_iso8601.py -> build/lib.linux-x86_64-2.7/iso8601 creating build/bdist.linux-x86_64 creating build/bdist.linux-x86_64/egg creating build/bdist.linux-x86_64/egg/iso8601 copying build/lib.linux-x86_64-2.7/iso8601/__init__.py -> build/bdist.linux-x86_64/egg/iso8601 copying build/lib.linux-x86_64-2.7/iso8601/iso8601.py -> build/bdist.linux-x86_64/egg/iso8601 copying build/lib.linux-x86_64-2.7/iso8601/test_iso8601.py -> build/bdist.linux-x86_64/egg/iso8601 byte-compiling build/bdist.linux-x86_64/egg/iso8601/__init__.py to __init__.pyc byte-compiling build/bdist.linux-x86_64/egg/iso8601/iso8601.py to iso8601.pyc byte-compiling build/bdist.linux-x86_64/egg/iso8601/test_iso8601.py to test_iso8601.pyc creating build/bdist.linux-x86_64/egg/EGG-INFO copying iso8601.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO copying iso8601.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying iso8601.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying iso8601.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO zip_safe flag not set; analyzing archive contents... creating dist creating 'dist/iso8601-0.1.12-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it removing 'build/bdist.linux-x86_64/egg' (and everything under it) Processing iso8601-0.1.12-py2.7.egg Copying iso8601-0.1.12-py2.7.egg to /home/marcin_ose/.local/lib/aws/lib/python2.7/site-packages Adding iso8601 0.1.12 to easy-install.pth file Installed /home/marcin_ose/.local/lib/aws/lib/python2.7/site-packages/iso8601-0.1.12-py2.7.egg Processing dependencies for iso8601==0.1.12 Finished processing dependencies for iso8601==0.1.12 hancock%
- that worked; the next failure is a missing sqlalchemy
marcin_ose@hancock:~/sandbox/glacier-cli$ /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py Traceback (most recent call last): File "./glacier.py", line 38, in <module> import sqlalchemy ImportError: No module named sqlalchemy marcin_ose@hancock:~/sandbox/glacier-cli$
- followed the same procedure for sqlalchemy
wget "https://pypi.python.org/packages/da/ef/f10a6892f8ff3c1fec1c25699a7379d1f72f291c8fa40b71c31cab3f779e/SQLAlchemy-1.2.6.tar.gz"
tar -xzvf SQLAlchemy-1.2.6.tar.gz
cd SQLAlchemy-1.2.6
/home/marcin_ose/.local/lib/aws/bin/python setup.py install
- that worked!
marcin_ose@hancock:~/sandbox/glacier-cli$ /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py usage: glacier.py [-h] [--region REGION] {vault,archive,job} ... glacier.py: error: too few arguments marcin_ose@hancock:~/sandbox/glacier-cli$
- glacier-cli uses a client-side metadata cache for the archives to solve the issues I encountered above when doing this shit manually, so the first step is to initiate a sync. Unfortunately, that failed; it needs my access keys & stuff
marcin_ose@hancock:~/sandbox/glacier-cli$ /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py vault sync --wait deleteMeIn2020 Traceback (most recent call last): File "./glacier.py", line 736, in <module> main() File "./glacier.py", line 732, in main App().main() File "./glacier.py", line 707, in __init__ connection = boto.glacier.connect_to_region(args.region) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/__init__.py", line 59, in connect_to_region return region.connect(**kw_params) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/regioninfo.py", line 63, in connect return self.connection_cls(region=self, **kw_params) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/layer2.py", line 38, in __init__ self.layer1 = Layer1(*args, **kwargs) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/glacier/layer1.py", line 63, in __init__ suppress_consec_slashes) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/connection.py", line 559, in __init__ host, config, self.provider, self._required_auth_capability()) File "/home/marcin_ose/.local/lib/aws/local/lib/python2.7/site-packages/boto/auth.py", line 732, in get_auth_handler 'Check your credentials' % (len(names), str(names))) boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials marcin_ose@hancock:~/sandbox/glacier-cli$
- there is no documentation on how to add creds in the glacier-cli project, but an issue ticket suggests using env vars. boto suggests that it can pull from my ~/.aws/credentials file. I tried setting the env vars too, but got the same result https://boto3.readthedocs.io/en/latest/guide/configuration.html
- err, I missed the 'export' prefix for the vars. adding that worked
marcin_ose@hancock:~/sandbox/glacier-cli$ export AWS_ACCESS_KEY_ID='CHANGEME' marcin_ose@hancock:~/sandbox/glacier-cli$ export AWS_SECRET_ACCESS_KEY='CHANGEME' marcin_ose@hancock:~/sandbox/glacier-cli$ /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py --region us-west-2 vault sync --wait deleteMeIn2020 marcin_ose@hancock:~/sandbox/glacier-cli$ /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py --region us-west-2 vault list deleteMeIn2020 marcin_ose@hancock:~/sandbox/glacier-cli$
- this is fucking awesome! the archive list works wonders, and stores it locally
marcin_ose@hancock:~/sandbox/glacier-cli$ /home/marcin_ose/.local/lib/aws/bin/python ./glacier.py archive list deleteMeIn2020 id:zr-OjFat_oTJ4k_bMRdczuqDL_GNBpbgTVcHYSg6N-vTWvCe9FNgxJXrFeT26eL2LiXMEpijzaretHvFdyFYQarfZZzcFr0GEEB2O4rVEjtslkGuhbHfWMIGFbQZXQgmjE9aKl4EpA this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:Rb3XtAMEDXlx4KSEZ_-OdA121VJ4jHPEPHIGr33GUJ7wbixaxIzSa5gXV-2i_7-AH-_KUCuLMQbmMPxRN7an7xmMr3PHlzdZMXQj1YTFlJC0g2BT2_F1HJf8h6IocDcR-7EJQeFTqw this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates id:qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA this is a metadata file showing the file and dir list contents of the archive of the same name id:lEJNkWsTF-zZ1fj_2XDVrgbTFGhthkMo0FsLyCb7EM18JrQ-SimUAhAi7HtkrTZMT-wuYSDupFGDVzh87cZlzxRXrex_9NHtTkQyp93A2gICb9zOLDViUr8gHJO6AcyN-R9j2yiIDw this is a metadata file showing the file and dir list contents of the archive of the same name id:fOeCrDHiQUrbvZoyT-jkSQH_euCAhtRcy8wetvONgUWyJBYzxM7AMmbc4YJzRuroL57hVmIUDQRHS-deAo3WG0esgBU52W2qes-47L1-VkczCpYkeGQjlNFGXaKE7ZeZ6jgZ3hBnpw this is a metadata file showing the file and dir list contents of the archive of the same name marcin_ose@hancock:~/sandbox/glacier-cli$
- the archive metadata itself is stored locally in ~/.cache/glacier-cli/db. It's only a few K.
marcin_ose@hancock:~/sandbox/glacier-cli$ ls -lah ~/.cache/glacier-cli/db -rw-r--r-- 1 marcin_ose pg1589252 5.0K Mar 31 14:19 /home/marcin_ose/.cache/glacier-cli/db
- removed the gz compression from the tarball creation for the archive; it shouldn't buy us anything since the contents are already compressed.
- updated the uploadToGlacier.sh script to use glacier-cli
- ran the uploadToGlacier.sh script again; hopefully this will fully complete the upload for 'hetzner1/20170901-052001'
- while I waited for the uploadToGlacier.sh script to finish for 'hetzner1/20170901-052001', I began to document why we don't use a provisioning tool on our wiki (config rot) http://opensourceecology.org/wiki/OSE_Server#Provisioning
- the uploadToGlacier.sh script finished
marcin_ose@hancock:~/backups$ ./uploadToGlacier.sh + backupDirs=hetzner1/20170901-052001 + syncDir=/home/marcin_ose/backups/uploadToGlacier + encryptionKeyFilePath=/home/marcin_ose/backups/ose-backups-cron.key ++ echo hetzner1/20170901-052001 + for dir in '$(echo $backupDirs)' ++ echo hetzner1/20170901-052001 ++ tr / _ + archiveName=hetzner1_20170901-052001 ++ date -u --rfc-3339=seconds + timestamp='2018-03-31 21:38:39+00:00' + fileListFilePath=/home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt + archiveFilePath=/home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar + echo ================================================================================ + echo 'This file is metadata for the archive '\hetzner1_20170901-052001'\. In it, we list all the files included in the compressed/encrypted archive (produced using '\ls -lahR hetzner1/20170901-052001'\), including the files within the tarballs within the archive (produced using '\find hetzner1/20170901-052001 -type f -exec tar -tvf '\{}'\ \; '\)' + echo '' + echo ' - Michael Altfield <maltfield@opensourceecology.org>' + echo '' + echo ' Note: this file was generated at 2018-03-31 21:38:39+00:00' + echo ================================================================================ + echo '#############################' + echo '# '\ls -lahR'\ output follows #' + echo '#############################' + ls -lahR hetzner1/20170901-052001 + echo ================================================================================ + echo '############################' + echo '# tarball contents follows #' + echo '############################' + find hetzner1/20170901-052001 -type f -exec tar -tvf '{}' ';' + echo ================================================================================ + bzip2 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt.bz2 Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt rm: cannot remove ‘/home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt’: No such file or directory + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload --name 'hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name' /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt.bz2.gpg usage: glacier.py archive upload [-h] [--name NAME] vault file glacier.py archive upload: error: too few arguments + tar -cvf /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar hetzner1/20170901-052001/ hetzner1/20170901-052001/ hetzner1/20170901-052001/public_html/ hetzner1/20170901-052001/public_html/public_html.20170901-052001.tar.bz2 + gpg --symmetric --cipher-algo aes --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar Reading passphrase from file descriptor 3 + rm /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar + /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload --name 'hetzner1_20170901-052001.tar.gpg: this is an encrypted tarball of a backup 
from our ose server taken at the time that the archive description prefix indicates' /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.tar.gpg usage: glacier.py archive upload [-h] [--name NAME] vault file glacier.py archive upload: error: too few arguments marcin_ose@hancock:~/backups$
- damn, I missed the vault name in the command. did it manually & updated the script
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/sandbox/glacier-cli/glacier.py --region us-west-2 archive upload --name 'hetzner1_20170901-052001.fileList.txt.bz2.gpg: this is a metadata file showing the file and dir list contents of the archive of the same prefix name' deleteMeIn2020 /home/marcin_ose/backups/uploadToGlacier/hetzner1_20170901-052001.fileList.txt.bz2.gpg hancock%
- a backup isn't a backup until a restore has been tested, so I'm going to fully test a restore of this 'hetzner1/20170901-052001' from a distinct machine (hetzner2) and verify that I can download the archive, decrypt it, extract it, and--finally--cat a config file (as an end-to-end test of the restore procedure)
- I successfully installed glacier-cli on hetzner2 & documented how on the wiki OSE_Server#Restore_from_Glacier
- I kicked-off an inventory update on hetzner2; this will take ~4 hours. I'll have to wait until tomorrow to kick-off the job to download the 'hetzner1/20170901-052001' backup from Glacier onto hetzner2.
[root@hetzner2 glacier-cli]# ./glacier.py --region us-west-2 vault sync --max-age=0 --wait deleteMeIn2020
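- once the sync finishes, the restore test will look roughly like the following sketch (hedged: the 'archive retrieve' subcommand is what the glacier-cli README documents, but I haven't exercised it yet; the key file path on hetzner2 & the abbreviated archive name are placeholders):
# download the archive by its name (the long description string used at upload time)
./glacier.py --region us-west-2 archive retrieve --wait deleteMeIn2020 'hetzner1_20170901-052001.tar.gpg: ...'
# decrypt with the same symmetric key used at upload time
gpg --passphrase-file /path/to/ose-backups-cron.key --output hetzner1_20170901-052001.tar --decrypt 'hetzner1_20170901-052001.tar.gpg: ...'
# extract & spot-check a file as the end-to-end validation
tar -xvf hetzner1_20170901-052001.tar
tar -tvf hetzner1/20170901-052001/public_html/public_html.20170901-052001.tar.bz2 | head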
- checked the aws console for the billing dashboard; it now shows $0.02. $0.01 for 6 requests to glacier & $0.01 for 0.062 GB-Mo for "USW2-TimedStorage-ByteHrs"
- for some reason S3 popped up as showing 3 requests. We're well within the free tier for these requests, but I guess that might be glacier-cli doing some s3 requests?
Fri Mar 30, 2018
- tried to install aws-cli on dreamhost, just for our user
# create temporary directory
tmpdir=`mktemp -d`
pushd "$tmpdir"

/usr/bin/wget "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip"
/usr/bin/unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws

# cleanly exit
popd
/bin/rm -rf "$tmpdir"
exit 0
- unfortunately, we still got a "permission denied"
hancock% ./awscli-bundle/install -b ~/bin/aws zsh: permission denied: ./awscli-bundle/install hancock%
- I tried switching from the default shell (zsh) to bash & ensured I had execute permissions, but the same issue occurred
- I checked the shebang, and I saw it was '/usr/bin/env python'. So I just tried to execute it with the python interpreter, and that seemed to work:
marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ python ./awscli-bundle/install -b ~/bin/aws Running cmd: /usr/bin/python virtualenv.py --no-download --python /usr/bin/python /home/marcin_ose/.local/lib/aws Running cmd: /home/marcin_ose/.local/lib/aws/bin/pip install --no-index --find-links file:///tmp/tmp.zfN3S0BJAC/awscli-bundle/packages awscli-1.14.67.tar.gz You can now run: /home/marcin_ose/bin/aws --version marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ /usr/bin/env python
- execution of the now "installed" aws-cli package still failed
marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ ls -lah /home/marcin_ose/bin/aws lrwxrwxrwx 1 marcin_ose pg1589252 39 Mar 30 09:28 /home/marcin_ose/bin/aws -> /home/marcin_ose/.local/lib/aws/bin/aws marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ /home/marcin_ose/bin/aws --version bash: /home/marcin_ose/bin/aws: Permission denied marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ /home/marcin_ose/.local/lib/aws/bin/aws --version bash: /home/marcin_ose/.local/lib/aws/bin/aws: Permission denied marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$
- calling this with the built-in python does not work
marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ /usr/bin/python /home/marcin_ose/.local/lib/aws/bin/aws --version Traceback (most recent call last): File "/home/marcin_ose/.local/lib/aws/bin/aws", line 19, in <module> import awscli.clidriver ImportError: No module named awscli.clidriver marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$
- confirmed it's python, but with a different path than the built-in python
marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ head -n1 /home/marcin_ose/.local/lib/aws/bin/aws #!/home/marcin_ose/.local/lib/aws/bin/python marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$
- calling this through python does work
marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/.local/lib/aws/bin/aws --version aws-cli/1.14.67 Python/2.7.6 Linux/3.2.61-grsec-modsign botocore/1.9.20 marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$
- calling this with the short-hand symlink in "~/bin/" also does work
marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws --version aws-cli/1.14.67 Python/2.7.6 Linux/3.2.61-grsec-modsign botocore/1.9.20 marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$
- got the command to test the aws-cli & creds
marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$ /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier list-vaults usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] To see help text, you can run: aws help aws <command> help aws <command> <subcommand> help aws: error: argument --account-id is required marcin_ose@hancock:/tmp/tmp.zfN3S0BJAC$
- saved the creds for the 'backup-cron' user's access keys to the configuration on our shared dreamhost server (hancock)
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws configure AWS Access Key ID [None]: CHANGEME AWS Secret Access Key [None]: CHANGEME Default region name [None]: us-west-2 Default output format [None]: hancock%
- this produced the following config files. note that we use 'us-west-2' for the default region. Not 'usw2'. Not 'us-west-2a'.
hancock% cat ~/.aws/credentials [default] aws_access_key_id = CHANGEME aws_secret_access_key = CHANGEME hancock% cat ~/.aws/config [default] region = us-west-2 hancock%
- successfully queried for the vaults, and got the vault I created in the console yesterday
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier list-vaults --account-id '099400651767' { "VaultList": [ { "SizeInBytes": 0, "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", "CreationDate": "2018-03-29T21:36:06.041Z", "VaultName": "deleteMeIn2020", "NumberOfArchives": 0 } ] }
- note that this fails if I leave off the '--account-id' argument
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier list-vaults usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] To see help text, you can run: aws help aws <command> help aws <command> <subcommand> help aws: error: argument --account-id is required hancock%
- but also note that I can let it default to the account associated with my access key by using the hyphen (-)
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier list-vaults --account-id '-' { "VaultList": [ { "SizeInBytes": 0, "VaultARN": "arn:aws:glacier:us-west-2:099400651767:vaults/deleteMeIn2020", "CreationDate": "2018-03-29T21:36:06.041Z", "VaultName": "deleteMeIn2020", "NumberOfArchives": 0 } ] }
- found the command to upload 'archives' into 'vaults'. Note that I should also use '--archive-description' to describe what the backup is, to reduce the risk of costly, superfluous future retrievals
aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description 'blah blah' --body archive.tar.bz2.gpg
- began further cleanup of the hetzner1 & hetzner2 directories on dreamhost's hancock. We definitely want to keep the oldest backup that I made, but beyond that we only need one from the first of every month. Here's what it was before cleanup (total of 410G)
hancock% du -sh hetzner1/*
39G     hetzner1/20170701-052001
519M    hetzner1/20170718-052001
39G     hetzner1/20170801-052001
217M    hetzner1/20170802-052001
12G     hetzner1/20170901-052001
2.3G    hetzner1/20170902-052001
12G     hetzner1/20171001-052001
2.4G    hetzner1/20171002-052001
12G     hetzner1/20171101-062001
2.5G    hetzner1/20171102-062001
12G     hetzner1/20171201-062001
15G     hetzner1/20171201-214116
2.9G    hetzner1/20171202-062001
12G     hetzner1/20180101-062001
3.1G    hetzner1/20180102-062001
27G     hetzner1/20180201-062001
241M    hetzner1/20180202-062002
28G     hetzner1/20180301-062002
28G     hetzner1/20180301-150405
254M    hetzner1/20180302-062002
0       hetzner1/20180325-052001
12G     hetzner1/20180326-052001
12G     hetzner1/20180327-052001
12G     hetzner1/20180328-052001
12G     hetzner1/20180329-052001
12G     hetzner1/20180330-052001
hancock% du -sh hetzner2/*
20G     hetzner2/20170702-052001
52K     hetzner2/20170729-072001
1.7G    hetzner2/20170801-072001
1.7G    hetzner2/20170901-072001
2.5G    hetzner2/20171001-072001
838M    hetzner2/20171101-072001
840M    hetzner2/20171202-010653
997M    hetzner2/20171202-072001
1.1G    hetzner2/20180102-072001
14G     hetzner2/20180202-072001
26G     hetzner2/20180301-150533
25G     hetzner2/20180302-072001
0       hetzner2/20180325-072001
2.8G    hetzner2/20180326-072001
2.8G    hetzner2/20180327-072001
2.8G    hetzner2/20180328-072001
2.8G    hetzner2/20180329-072001
2.8G    hetzner2/20180330-072001
hancock%
- deleted superfluous backups
rm -rf hetzner1/20170718-052001
rm -rf hetzner1/20170802-052001
rm -rf hetzner1/20170902-052001
rm -rf hetzner1/20171002-052001
rm -rf hetzner1/20171102-062001
rm -rf hetzner1/20171201-214116
rm -rf hetzner1/20171202-062001
rm -rf hetzner1/20180102-062001
rm -rf hetzner1/20180202-062002
rm -rf hetzner1/20180301-150405
rm -rf hetzner1/20180302-062002
rm -rf hetzner2/20170729-072001
rm -rf hetzner2/20171202-010653
rm -rf hetzner2/20180301-150533
- And after cleanup (total of 328G)
hancock% du -sh *
247G    hetzner1
81G     hetzner2
4.0K    readme.txt
4.0K    uploadToGlacier.py
hancock% du -sh hetzner1/*
39G     hetzner1/20170701-052001
15M     hetzner1/20170718-052001
39G     hetzner1/20170801-052001
12G     hetzner1/20170901-052001
12G     hetzner1/20171001-052001
12G     hetzner1/20171101-062001
12G     hetzner1/20171201-062001
12G     hetzner1/20180101-062001
27G     hetzner1/20180201-062001
28G     hetzner1/20180301-062002
0       hetzner1/20180325-052001
12G     hetzner1/20180326-052001
12G     hetzner1/20180327-052001
12G     hetzner1/20180328-052001
12G     hetzner1/20180329-052001
12G     hetzner1/20180330-052001
hancock% du -sh hetzner2/*
20G     hetzner2/20170702-052001
1.7G    hetzner2/20170801-072001
1.7G    hetzner2/20170901-072001
2.5G    hetzner2/20171001-072001
838M    hetzner2/20171101-072001
997M    hetzner2/20171202-072001
1.1G    hetzner2/20180102-072001
14G     hetzner2/20180202-072001
25G     hetzner2/20180302-072001
0       hetzner2/20180325-072001
2.8G    hetzner2/20180326-072001
2.8G    hetzner2/20180327-072001
2.8G    hetzner2/20180328-072001
2.8G    hetzner2/20180329-072001
2.8G    hetzner2/20180330-072001
hancock%
- I didn't delete the most recent few days of backups, but those aren't going to glacier. Only these are going to glacier (total 261G):
39G     hetzner1/20170701-052001
39G     hetzner1/20170801-052001
12G     hetzner1/20170901-052001
12G     hetzner1/20171001-052001
12G     hetzner1/20171101-062001
12G     hetzner1/20171201-062001
12G     hetzner1/20180101-062001
27G     hetzner1/20180201-062001
28G     hetzner1/20180301-062002
20G     hetzner2/20170702-052001
1.7G    hetzner2/20170801-072001
1.7G    hetzner2/20170901-072001
2.5G    hetzner2/20171001-072001
838M    hetzner2/20171101-072001
997M    hetzner2/20171202-072001
1.1G    hetzner2/20180102-072001
14G     hetzner2/20180202-072001
25G     hetzner2/20180302-072001
- amazon claims that they encrypt the data on their end...and they hold the encryption keys. That's not good. This data holds many config files containing many of our passwords. It's extremely important that it's encrypted before being shipped to a 3rd-party cloud provider.
- It would be nice if there were some mechanism for easily retrieving a directory listing for a given archive in a glacier vault, without having to download the entire encrypted archive, decrypt it, extract it, then list it. That could easily cost $20 for a given archive. So, if someone needs to restore something but doesn't know exactly which monthly archive they need, an (encrypted) directory listing for each archive could save some serious $$.
- I think the best option is, for each of the once-a-month backups listed above, we upload a pair of archives: (1) the compressed & encrypted archive itself and (2) a small compressed & encrypted dir/file list. For the compression, I'll use bz2, but I won't convert existing gz-compressed tarballs to bz2. For encryption, I'll symmetrically encrypt it with a new 4K encryption key using gpg2.
- checked the differences between hetzner1 & hetzner2 backups. Looks like I used bz2 for hetzner1 (probably because space was limited) and gz on hetzner2.
hancock% ls -R hetzner1/20170701-052001 hetzner1/20170701-052001: home mysqldump public_html hetzner1/20170701-052001/home: home.20170705-052001.tar.bz2 hetzner1/20170701-052001/mysqldump: fef forum openswh osemain wiki hetzner1/20170701-052001/mysqldump/fef: mysqldump-fef.20170705-052001.sql.bz2 hetzner1/20170701-052001/mysqldump/forum: mysqldump-forum.20170705-052001.sql.bz2 hetzner1/20170701-052001/mysqldump/openswh: mysqldump-openswh.20170705-052001.sql.bz2 hetzner1/20170701-052001/mysqldump/osemain: mysqldump-osemain.20170705-052001.sql.bz2 hetzner1/20170701-052001/mysqldump/wiki: mysqldump-wiki.20170705-052001.sql.bz2 hetzner1/20170701-052001/public_html: public_html.20170705-052001.tar.bz2 hancock% ls -R hetzner2/20171101-072001 hetzner2/20171101-072001: etc home log mysqldump root www hetzner2/20171101-072001/etc: etc.20171101-072001.tar.gz hetzner2/20171101-072001/home: home.20171101-072001.tar.gz hetzner2/20171101-072001/log: log.20171101-072001.tar.gz hetzner2/20171101-072001/mysqldump: mysqldump.20171101-072001.sql.gz hetzner2/20171101-072001/root: root.20171101-072001.tar.gz hetzner2/20171101-072001/www: www.20171101-072001.tar.gz hancock%
- the smallest backup on this list is 'hetzner2/20171101-072001', so I'll use that one. But first I confirmed that this command generates our dir/file list from both the gz & bz2 tarballs from both hetzner1 & hetzner2
hancock% time find 'hetzner2/20171101-072001' -type f -exec tar -tvf '{}' \; ... 11.89s user 2.13s system 69% cpu 20.302 total
- I generated a 4K key file for the client-side symmetric encryption of files before sending them to glacier. I added it to my personal encrypted backups & ensured they were in 3x distinct locations of my personal backups. I'm having packet loss to hetzner now, but this should be copied to hetzner & added to keepass asap.
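- the exact key-generation command isn't captured here, but it amounts to something like this sketch (3072 random bytes base64-encode to a 4096-character passphrase file):
dd if=/dev/urandom bs=3072 count=1 2>/dev/null | base64 -w0 > /home/marcin_ose/backups/ose-backups-cron.key
chmod 0400 /home/marcin_ose/backups/ose-backups-cron.key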
- tested encryption commands & confirmed that decryption works
gpg --symmetric --cipher-algo aes --armor --passphrase-file "/home/marcin_ose/backups/ose-backups-cron.key" hetzner2_20171101-072001.fileList.txt.bz2
gpg --passphrase-file "/home/marcin_ose/backups/ose-backups-cron.key" --output hetzner2_20171101-072001.fileList.txt.bz2 --decrypt hetzner2_20171101-072001.fileList.txt.bz2.asc
bzcat hetzner2_20171101-072001.fileList.txt.bz2 | head
- finished a quick script to generate the encrypted file list & encrypted archive. did a manual upload attempt of the file list before adding it to the script; it worked.
hancock% /home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description 'this is a metadata file showing the file and dir list contents of the archive of the same name' --body hetzner2_20171101-072001.fileList.txt.bz2.asc { "archiveId": "qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA", "checksum": "a1301459044fa4680af11d3e2d60b33a49de7e091491bd02d497bfd74945e40b", "location": "/099400651767/vaults/deleteMeIn2020/archives/qZJWJ57sBb9Nsz0lPGKruocLivg8SVJ40UiZznG308wSPAS0vXyoYIOJekP81YwlTmci-eWETvsy4Si2e5xYJR0oVUNLadwPVkbkPmEWI1t75fbJM_6ohrNjNkwlyWPLW-lgeOaynA" } hancock%
- tried to confirm this upload in the aws console, but it showed "-" for "# of Archives" in the "deleteMeIn2020" vault "(as of last inventory)". The last inventory was listed as "Not updated yet"
- this is the script I have so far for automating this process. it is not yet complete.
hancock% cat uploadToGlacier.sh
#!/bin/bash

############
# SETTINGS #
############

backupDirs="hetzner2/20171101-072001"
syncDir="/home/marcin_ose/backups/uploadToGlacier"
encryptionKeyFilePath="/home/marcin_ose/backups/ose-backups-cron.key"

##############
# DO UPLOADS #
##############

for dir in $(echo $backupDirs); do

	archiveName=`echo ${dir} | tr '/' '_'`;
	timestamp=`date -u --rfc-3339=seconds`
	fileListFilePath="${syncDir}/${archiveName}.fileList.txt"
	archiveFilePath="${syncDir}/${archiveName}.tar.gz"

	#########################
	# archive metadata file #
	#########################

	# first, generate a file list to help the future sysadmin get metadata about the archive without having to download the huge archive itself
	echo "================================================================================" > "${fileListFilePath}"
	echo "This file is metadata for the archive '${archiveName}'. In it, we list all the files included in the compressed/encrypted archive (produced using 'ls -lahR ${dir}'), including the files within the tarballs within the archive (produced using 'find "${dir}" -type f -exec tar -tvf '{}' \; ')" >> "${fileListFilePath}"
	echo "" >> "${fileListFilePath}"
	echo " - Michael Altfield <maltfield@opensourceecology.org>" >> "${fileListFilePath}"
	echo "" >> "${fileListFilePath}"
	echo " Note: this file was generated at ${timestamp}" >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"
	echo "#############################" >> "${fileListFilePath}"
	echo "# 'ls -lahR' output follows #" >> "${fileListFilePath}"
	echo "#############################" >> "${fileListFilePath}"
	ls -lahR ${dir} >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"
	echo "############################" >> "${fileListFilePath}"
	echo "# tarball contents follows #" >> "${fileListFilePath}"
	echo "############################" >> "${fileListFilePath}"
	find "${dir}" -type f -exec tar -tvf '{}' \; >> "${fileListFilePath}"
	echo "================================================================================" >> "${fileListFilePath}"

	# compress the metadata file
	bzip2 "${fileListFilePath}"

	# encrypt the metadata file
	gpg --symmetric --cipher-algo aes --armor --passphrase-file "${encryptionKeyFilePath}" "${fileListFilePath}.bz2"

	# delete the unencrypted metadata file
	rm "${fileListFilePath}"

	# upload it
	/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description 'this is a metadata file showing the file and dir list contents of the archive of the same name' --body "${fileListFilePath}.bz2.asc"

	################
	# archive file #
	################

	# generate archive file as a single, compressed file
	tar -czvf "${archiveFilePath}" "${dir}/"

	# encrypt the archive file
	gpg --symmetric --cipher-algo aes --armor --passphrase-file "${encryptionKeyFilePath}" "${archiveFilePath}"

	# delete the unencrypted archive
	rm "${archiveFilePath}"

	# upload it
	#/home/marcin_ose/.local/lib/aws/bin/python /home/marcin_ose/bin/aws glacier upload-archive --account-id - --vault-name 'deleteMeIn2020' --archive-description 'this is a compressed and encrypted tarball of a backup from our ose server taken at the time the archive name indicates' --body "${archiveFilePath}.asc"

done
hancock%
- I uploaded both an ascii-armored gpg-encrypted version of the file list and one without the --armor arg. In the morning I'll test a restore of this small archive. If it works, then I can be confident this backup solution works, and I'll proceed with using it to move all the necessary backups to glacier.
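- for reference, restoring from glacier with aws-cli is an asynchronous, multi-step process; here's a minimal sketch of the restore test, assuming the vault name from above & an archiveId copied from the upload output (or from a vault inventory job):
# 1. ask glacier to stage the archive for download (standard retrievals typically take a few hours; mind the retrieval + egress costs)
aws glacier initiate-job --account-id - --vault-name 'deleteMeIn2020' --job-parameters '{"Type": "archive-retrieval", "ArchiveId": "<archiveId>"}'

# 2. poll until the job's "Completed" field is true
aws glacier describe-job --account-id - --vault-name 'deleteMeIn2020' --job-id '<jobId>'

# 3. download the staged archive, then decrypt & decompress it
aws glacier get-job-output --account-id - --vault-name 'deleteMeIn2020' --job-id '<jobId>' hetzner2_20171101-072001.fileList.txt.bz2.asc
gpg --batch --passphrase-file /home/marcin_ose/backups/ose-backups-cron.key --decrypt hetzner2_20171101-072001.fileList.txt.bz2.asc > hetzner2_20171101-072001.fileList.txt.bz2
bunzip2 hetzner2_20171101-072001.fileList.txt.bz2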
Thr Mar 29, 2018
- Marcin created an aws account & sent me the credentials
- I added the aws credentials to our OSE Keepass
- my greatest fear of using aws is that automation kicks off costs that spiral out of control. So the first thing I did was check the billing & confirmed it's $0. Then I created a Budget of $1, configured to email ops, me, and Marcin when that is exceeded.
- I also checked the box that says "Receive Free Tier Usage Alerts" so we get alerts when we're approaching the limit of free tier usage limits
- confirmed that there is no way to cap the bill, though this feature is often requested of Amazon. Presumably they don't want to add it! https://forums.aws.amazon.com/thread.jspa?threadID=58127&start=50&tstart=0
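- for reproducibility, the $1 budget above could also have been created with aws-cli; a rough sketch (untested; the alert email address is a placeholder):
aws budgets create-budget --account-id 099400651767 \
  --budget '{"BudgetName": "monthly-1usd", "BudgetLimit": {"Amount": "1", "Unit": "USD"}, "TimeUnit": "MONTHLY", "BudgetType": "COST"}' \
  --notifications-with-subscribers '[{"Notification": {"NotificationType": "ACTUAL", "ComparisonOperator": "GREATER_THAN", "Threshold": 100, "ThresholdType": "PERCENTAGE"}, "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}]}]'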
- I created a user-specific account for me named 'maltfield'
- I created a Group called 'Administrator' with the preconfigured Managed Policy = AdministratorAccess
- I added my 'maltfield' user to this 'Administrator' group
- created a password policy requiring a minimum of 20 characters
- created a user for Marcin & sent him credentials
- created a policy 'Force_MFA' per this guide & added to the 'Administrator' group https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_users-self-manage-mfa-and-creds.html#tutorial_mfa_step1
- created new group called 'Billing' & attached the default policy 'Billing' to it. Added me & Marcin's users to this group so he could update billing info from his account directly, and so I could monitor our spending.
- this didn't work; the issue was that I needed to enable the checkbox "Activate IAM Access" under the "IAM User and Role Access to Billing Information" section when updating the Account Settings as the root user before federated IAM users could access billing info.
- I removed my 'maltfield' user from the 'Billing' group, and I still had access. So this confirms that the Billing group is superfluous; Administrator is sufficient. I deleted the 'Billing' group.
- went to create a policy for the backup script, but I guess that's only do-able from an instance in ec2
- created a new user 'backup-cron' with programmatic access only & full access to glacier & s3 services only, using a new group 'backup-cron'.
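- the equivalent aws-cli commands would be roughly this (a sketch using the AWS-managed policies; we actually did it in the console):
aws iam create-group --group-name backup-cron
aws iam attach-group-policy --group-name backup-cron --policy-arn arn:aws:iam::aws:policy/AmazonGlacierFullAccess
aws iam attach-group-policy --group-name backup-cron --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-user --user-name backup-cron
aws iam add-user-to-group --group-name backup-cron --user-name backup-cron
# programmatic-only: create an access key & never set a console password
aws iam create-access-key --user-name backup-cron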
- used the aws console to create a new glacier vault in usw2 titled 'deleteMeIn2020' to stash all the backups that predate me & stuff from hetzner1 that we're going to *not* move to hetzner2. This may end up being 1T in size. If there's nothing there that needs to be recovered by 2020, then we should delete the vault entirely. I'll use a different vault for the monthly backups.
- looked into boto & aws-cli. For simplicity, I'll use aws-cli in bash unless there arises a need for boto in python.
Mon Mar 26, 2018
- Marcin asked me about issues with page views on hetzner1's prod wiki site. I responded that these stats are so bad that Mediawiki removed them entirely in 2014 (though we have an older version in prod!). I showed him how to get to hetzner's provided awstats, which should provide the data he wants.
Fri Mar 23, 2018
- Dreamhost very kindly granted my request to extend our deadline to delete our data; they were happy just to hear that we were actively decreasing our usage & working to resolve the issue. They said we have until April 6th; that's not quite the 2-week extension I asked for, but they said "if you need more time, please let me know!"
- confirmed that our daily backups have decreased significantly. hetzner1 is down from 28G to 12G and hetzner2 is down from 25G to 2.8G.
hancock% du -sh hetzner1/*
39G   hetzner1/20170701-052001
519M  hetzner1/20170718-052001
39G   hetzner1/20170801-052001
217M  hetzner1/20170802-052001
12G   hetzner1/20170901-052001
2.3G  hetzner1/20170902-052001
12G   hetzner1/20171001-052001
2.4G  hetzner1/20171002-052001
12G   hetzner1/20171101-062001
2.5G  hetzner1/20171102-062001
12G   hetzner1/20171201-062001
15G   hetzner1/20171201-214116
2.9G  hetzner1/20171202-062001
12G   hetzner1/20180101-062001
3.1G  hetzner1/20180102-062001
27G   hetzner1/20180201-062001
241M  hetzner1/20180202-062002
28G   hetzner1/20180301-062002
28G   hetzner1/20180301-150405
254M  hetzner1/20180302-062002
0     hetzner1/20180318-062001
28G   hetzner1/20180319-062001
28G   hetzner1/20180320-062001
12G   hetzner1/20180320-220716
12G   hetzner1/20180321-062002
12G   hetzner1/20180322-062001
12G   hetzner1/20180323-062001
hancock% du -sh hetzner2/*
20G   hetzner2/20170702-052001
52K   hetzner2/20170729-072001
1.7G  hetzner2/20170801-072001
1.7G  hetzner2/20170901-072001
2.5G  hetzner2/20171001-072001
838M  hetzner2/20171101-072001
840M  hetzner2/20171202-010653
997M  hetzner2/20171202-072001
1.1G  hetzner2/20180102-072001
14G   hetzner2/20180202-072001
26G   hetzner2/20180301-150533
25G   hetzner2/20180302-072001
0     hetzner2/20180318-072001
25G   hetzner2/20180319-072001
25G   hetzner2/20180320-072001
2.8G  hetzner2/20180320-203228
2.8G  hetzner2/20180321-072001
2.8G  hetzner2/20180322-072001
2.8G  hetzner2/20180323-072001
hancock%
# originally, we had used 587G at the time of their email:
We've noticed some content on your account that might run afoul of our Acceptable Use Policy and wanted to draw your attention to it: 572.46G "hancock:/home/marcin_ose/backups" 14.85G "hancock:/home/marcin_ose/hancockCleanupDump_deleteMeIn2018"
- now we're using only 498G
hancock% du -sh /home/marcin_ose/backups 498G /home/marcin_ose/backups hancock% du -sh /home/marcin_ose/hancockCleanupDump_deleteMeIn2018 52K /home/marcin_ose/hancockCleanupDump_deleteMeIn2018 hancock%
- added my email address as a secondary to receive account updates from our dreamhost account
- discovered that amazon breaks uploads to Glacier in chunks. So, when I thought I'd be charged for 1 request for 1 file upload, I was wrong. They charge per 1k requests. So a low chunk size may mean very expensive bills. I should research what the ideal chunk size would be. 32M? 512M? 1G?
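- the total number of requests (and therefore the per-1k-request charges) scales inversely with the part size; a quick back-of-the-envelope sketch in bash for a 500G upload:
for partSizeMB in 32 512 1024; do
   # ceiling division: parts = ceil( 500G / part size )
   parts=$(( (500 * 1024 + partSizeMB - 1) / partSizeMB ))
   echo "500G at ${partSizeMB}M parts = ${parts} upload requests"
done
# => 16000, 1000, and 500 requests respectively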
- it looks like there might be a surcharge for deleting data <3 months old. That would destroy our nightly backup solution!
- maybe we can use duplicity (which supports s3) to store nightlies in s3, but have first-of-the-months go to glacier instead (rough sketch after the tool list below)
- assuming the worst-case of the bloated backups at 26+28G = 54G per day * 3 days = 162G. usw2 = oregon s3 charges $0.023/G. So 200G in s3 would be $4.6/mo. That's $55/yr. Plus 500G in Glacier = 500G * $0.004 = $2/mo * 12 = $24/yr. So that's $79/yr, and not including the charges per request or charges for retrieval from Glacier.
- researched some options for tools to facilitate storing backups on amazon glacier
- bonus points would be some built-in "cleanup" function to delete all files, except those that meet certain criteria (in our case, it would be: (a) uploaded in the past 3 days, and (b) uploaded on the first of the month)
- https://github.com/basak/glacier-cli
- https://github.com/uskudnik/amazon-glacier-cmd-interface
- https://github.com/vsespb/mt-aws-glacier
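- circling back to the duplicity idea above, a rough sketch of what the nightly s3 half could look like (untested; the bucket name, prefix, key path, & retention window are assumptions):
# duplicity encrypts with $PASSPHRASE & authenticates to s3 via the usual env vars
export AWS_ACCESS_KEY_ID='<backup-cron key id>'
export AWS_SECRET_ACCESS_KEY='<backup-cron secret key>'
export PASSPHRASE="$(cat '<path to ose-backups-cron.key>')"

# nightly: incremental backup, starting a fresh full chain if the last full is >1 month old
duplicity --full-if-older-than 1M /root/backups/sync s3+http://ose-backups/hetzner2

# cleanup: delete chains whose newest backup is older than 3 days (only whole chains are removed)
duplicity remove-older-than 3D --force s3+http://ose-backups/hetzner2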
Wed Mar 21, 2018
- rsync'd a copy of hetzner1's files to hetzner2, so we have at least 1 offsite backup of our prod wiki
- Catarina sent me a message that obi has a cert warning. Indeed, the cert expired on 2018-03-17 = 4 days ago
- the cron was running, but it had an error
[root@hetzner2 ~]# date
Thu Mar 22 00:09:37 UTC 2018
[root@hetzner2 ~]# cat /etc/cron.d/letsencrypt
# once a month, update our letsencrypt cert
20 4 13 * * root /root/bin/letsencrypt/renew.sh &>> /var/log/letsEncryptRenew.log
[root@hetzner2 ~]# tail -n 40 /var/log/letsEncryptRenew.log
</head><body> <h1>Not Found</h1> <p". Skipping.
All renewal attempts failed. The following certs could not be renewed:
  /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem (failure)
-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/opensourceecology.org.conf
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/openbuildinginstitute.org.conf
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
The following certs are not due for renewal yet:
  /etc/letsencrypt/live/opensourceecology.org/fullchain.pem (skipped)
All renewal attempts failed. The following certs could not be renewed:
  /etc/letsencrypt/live/openbuildinginstitute.org/fullchain.pem (failure)
-------------------------------------------------------------------------------
1 renew failure(s), 0 parse failure(s)
IMPORTANT NOTES:
 - The following errors were reported by the server:
   Domain: awstats.openbuildinginstitute.org
   Type: unauthorized
   Detail: Invalid response from http://awstats.openbuildinginstitute.org/.well-known/acme-challenge/Fl5qCq0H90b35_6uJJGz7FuR77GPYfV3mFpN6gYxz0A:
   "<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
   <html><head>
   <title>404 Not Found</title>
   </head><body>
   <h1>Not Found</h1>
   <p"
   To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address.
/root/bin/letsencrypt/renew.sh: line 5: service: command not found
...
- fixed it by changing the awstats webroot from '/var/www/html/awstats.openbuildinginstitute.org/htdocs/' to '/var/www/html/certbot/htdocs/'
certbot -nv --expand --cert-name openbuildinginstitute.org certonly -v --webroot -w /var/www/html/www.openbuildinginstitute.org/htdocs/ -d www.openbuildinginstitute.org -w /var/www/html/www.openbuildinginstitute.org/htdocs/ -d openbuildinginstitute.org -w /var/www/html/seedhome.openbuildinginstitute.org/htdocs/ -d seedhome.openbuildinginstitute.org -w /var/www/html/certbot/htdocs/ -d awstats.openbuildinginstitute.org
/bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
nginx -t && service nginx reload
Tue Mar 20, 2018
- We received an email from our free, nonprofit account providers at Dreamhost indicating that storing our backups there was not permitted, and that we must delete them before March 26th
Hello,
We hope you're enjoying your DreamHost account!
We've noticed some content on your account that might run afoul of our Acceptable Use Policy and wanted to draw your attention to it: 572.46G "hancock:/home/marcin_ose/backups" 14.85G "hancock:/home/marcin_ose/hancockCleanupDump_deleteMeIn2018"
The storage on our servers is designed specifically for hosting websites and their associated content. It's not well suited to being used as a backup solution or other general-use storage solution. That can include (but is not limited to!) using the web server as method of delivering files not intended for website use.
The bad news is that we have to ask you to delete this content from your server within the next seven days (by the end of March 26, 2018). The good news is that we offer a service, DreamObjects, that's custom-built for storing large amounts of data like backups and other large archives.
You can learn more about DreamObjects here: https://www.dreamhost.com/cloud/dreamobjects/ https://help.dreamhost.com/hc/en-us/articles/214823108-What-is-Object-Storage
As a reference, please see our Acceptable Use and Unlimited storage policies in our terms of service:
Acceptable Use Policy: https://www.dreamhost.com/legal/acceptable-use-policy/
Unlimited Policy: https://www.dreamhost.com/legal/unlimited-policy/
Unfortunately, we cannot allow the data we cited above to continue to live on your current web service. We’d prefer that you move the data off yourself at a time that's convenient for you, but if you take no action we will delete the data above in seven days. Please let us know what you plan to do. Please note, we will try our best to ensure your data is not deleted before the date listed above, and your reply is appreciated.
If you feel that we're wrong about your content (hey, it happens!), please contact us with details so that we can investigate the matter for you and take appropriate action.
We appreciate you hosting with us, so please let us know if you have any questions about this message, our policies, or anything else related to your service. We're always available and happy to help!
Thanks! Elizabeth
- I sent a kind reply asking for a 2-week extension
- I sent an email to marcin asking if he had a large external hdd with ~500G of available space
- I did some quick research on our options for long-term storage of ~600G of backup data
- DreamHost recommended their service = DreamObjects. That would be ~$200/yr
- Amazon Glacier would be ~$50/yr
- storage costs ~$24/yr
- "requests" costs <$1<yr (?)
- tx in costs $0
- tx out costs ~$5 per day's backup restore. realistically, ~$20
- we could also get hetzner's 1T storage box for ~$100/yr, but then there's no geographic distribution of the data https://www.hetzner.com/storage-box?country=us
- mega is ~$100 per year for 1T too (2 months free with a 1 year plan) https://mega.nz/pro
- got a list of current backup data
hancock% date Tue Mar 20 13:07:40 PDT 2018 hancock% pwd /home/marcin_ose/backups hancock% du -sh hetzner1/* 39G hetzner1/20170701-052001 519M hetzner1/20170718-052001 39G hetzner1/20170801-052001 217M hetzner1/20170802-052001 12G hetzner1/20170901-052001 2.3G hetzner1/20170902-052001 12G hetzner1/20171001-052001 2.4G hetzner1/20171002-052001 12G hetzner1/20171101-062001 2.5G hetzner1/20171102-062001 12G hetzner1/20171201-062001 15G hetzner1/20171201-214116 2.9G hetzner1/20171202-062001 12G hetzner1/20180101-062001 3.1G hetzner1/20180102-062001 27G hetzner1/20180201-062001 241M hetzner1/20180202-062002 28G hetzner1/20180301-062002 28G hetzner1/20180301-150405 254M hetzner1/20180302-062002 0 hetzner1/20180315-062001 28G hetzner1/20180316-062001 28G hetzner1/20180317-062001 28G hetzner1/20180318-062001 28G hetzner1/20180319-062001 28G hetzner1/20180320-062001 hancock% du -sh hetzner2/* 20G hetzner2/20170702-052001 52K hetzner2/20170729-072001 1.7G hetzner2/20170801-072001 1.7G hetzner2/20170901-072001 2.5G hetzner2/20171001-072001 838M hetzner2/20171101-072001 840M hetzner2/20171202-010653 997M hetzner2/20171202-072001 1.1G hetzner2/20180102-072001 14G hetzner2/20180202-072001 26G hetzner2/20180301-150533 25G hetzner2/20180302-072001 0 hetzner2/20180315-072001 25G hetzner2/20180316-072001 25G hetzner2/20180317-072001 25G hetzner2/20180318-072001 25G hetzner2/20180319-072001 25G hetzner2/20180320-072001 hancock%
- so the backups on hetzner2 are 25G currently, but much of that is unnecessary wiki staging data that's already on the prod backups in hetzner1
[maltfield@hetzner2 html]$ du -sh * 4.5M awstats.openbuildinginstitute.org 6.4M awstats.opensourceecology.org 8.0K cacti.opensourceecology.org.old 20K certbot 240M fef.opensourceecology.org 2.9G forum.opensourceecology.org 6.3M munin 8.0K munin.opensourceecology.org 12K openbuildinginstitute.org 124M oswh.opensourceecology.org 75M seedhome.openbuildinginstitute.org 12K SITE_DOWN du: cannot read directory ‘staging.openbuildinginstitute.org/htdocs/wp-content/uploads/2017/12’: Permission denied 490M staging.openbuildinginstitute.org 506M staging.opensourceecology.org 16K varnishTest 31G wiki.opensourceecology.org 493M www.openbuildinginstitute.org 507M www.opensourceecology.org
- I moved that data to /var/tmp temporarily so we could get a small backup to store off dreamhost, so we have at least 1 good, recent offsite backup
[root@hetzner2 ~]# du -sh /var/www/html/* 4.5M /var/www/html/awstats.openbuildinginstitute.org 6.4M /var/www/html/awstats.opensourceecology.org 8.0K /var/www/html/cacti.opensourceecology.org.old 20K /var/www/html/certbot 240M /var/www/html/fef.opensourceecology.org 2.9G /var/www/html/forum.opensourceecology.org 6.3M /var/www/html/munin 8.0K /var/www/html/munin.opensourceecology.org 12K /var/www/html/openbuildinginstitute.org 124M /var/www/html/oswh.opensourceecology.org 75M /var/www/html/seedhome.openbuildinginstitute.org 12K /var/www/html/SITE_DOWN 490M /var/www/html/staging.openbuildinginstitute.org 506M /var/www/html/staging.opensourceecology.org 16K /var/www/html/varnishTest 50M /var/www/html/wiki.opensourceecology.org 493M /var/www/html/www.openbuildinginstitute.org 507M /var/www/html/www.opensourceecology.org [root@hetzner2 ~]#
- discovered that the backups on hetzner1 included the 'tmp/backups_for_migration_to_hetzner2/' dir, which included a copy of all the docroots
osemain@dedi978:~/backups/sync/home/usr/home/osemain$ du -sh ~/tmp/backups_for_migration_to_hetzner2/* 315M /usr/home/osemain/tmp/backups_for_migration_to_hetzner2/fef_20171202 159M /usr/home/osemain/tmp/backups_for_migration_to_hetzner2/fef_20180103 356M /usr/home/osemain/tmp/backups_for_migration_to_hetzner2/osemain_20180103 358M /usr/home/osemain/tmp/backups_for_migration_to_hetzner2/osemain_20180206 355M /usr/home/osemain/tmp/backups_for_migration_to_hetzner2/osemain_20180301 32M /usr/home/osemain/tmp/backups_for_migration_to_hetzner2/oswh_20171220 12G /usr/home/osemain/tmp/backups_for_migration_to_hetzner2/wiki_20180120 osemain@dedi978:~/backups/sync/home/usr/home/osemain$
- moved the 'tmp' dir into the 'noBackup' dir, which is already excluded from backups
- confirmed that the hetzner2 backups shrunk significantly following the wiki docroot move. From 25G to 2.8G!
[root@hetzner2 www]# time /bin/nice /root/backups/backup.sh &>> /var/log/backups/backup.log real 19m1.660s user 3m20.255s sys 0m17.407s [root@hetzner2 www]# [root@hetzner2 www]# # when finished, SSH into the dreamhost server to verify that the whole system backup was successful before proceeding [root@hetzner2 www]# bash -c 'source /root/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh backups/hetzner2/*' 20G backups/hetzner2/20170702-052001 52K backups/hetzner2/20170729-072001 1.7G backups/hetzner2/20170801-072001 1.7G backups/hetzner2/20170901-072001 2.5G backups/hetzner2/20171001-072001 838M backups/hetzner2/20171101-072001 840M backups/hetzner2/20171202-010653 997M backups/hetzner2/20171202-072001 1.1G backups/hetzner2/20180102-072001 14G backups/hetzner2/20180202-072001 26G backups/hetzner2/20180301-150533 25G backups/hetzner2/20180302-072001 0 backups/hetzner2/20180315-072001 25G backups/hetzner2/20180316-072001 25G backups/hetzner2/20180317-072001 25G backups/hetzner2/20180318-072001 25G backups/hetzner2/20180319-072001 25G backups/hetzner2/20180320-072001 2.8G backups/hetzner2/20180320-203228 [root@hetzner2 www]#
- uploaded the 2.8G backup to a new mega account
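- if the mega upload needs to be repeated or scripted, megatools' megaput can likely do it non-interactively; a sketch (untested; credentials, remote path, & file name are placeholders):
# upload the encrypted backup tarball to the mega account from the shell
megaput --username '<mega account email>' --password '<mega password>' --path /Root/backups '<encrypted backup file>'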
Fri Mar 16, 2018
- Discovered how to get some info out of the api. For example, you can get some useful info from siteinfo:
- http://opensourceecology.org/w/api.php?action=query&meta=siteinfo
- https://www.mediawiki.org/w/api.php?action=query&meta=siteinfo
- unfortunately, this (and many other places in Mediawiki, such as Special:Version) leaks server info
- There was a response to my question about the Chronology Protector cookie preventing varnish caching when Message Caching uses the DB (per the default) in MediaWiki on the Support Desk https://www.mediawiki.org/wiki/Topic:U9fys4phj04a85vu
- I was told this may be a bug (I'm not so certain yet), and that most large wikis that use varnish caching also use memcached for object caching.
- I want to avoid having to run a memcached server if possible, but I did some research into it anyway https://www.mediawiki.org/wiki/Manual:Memcached
- the Mediawiki Manual:Memcached article explicitly says that smaller sites should consider using APC with CACHE_ACCEL instead of memcached; that may be a better option for our Message Cache
<blockquote>Memcached is likely more trouble than a small site will need (for a single server, consider storing both code and data in APC - CACHE_ACCEL), but for a larger site with heavy load, like Wikipedia, it should help lighten the load on the database servers by caching data and objects in memory.</blockquote>
- began researching using a bytecode accelerator for php + using its cache instead of the database in mediawiki
- mediawiki recommends using OPcache, which is built into PHP >=5.5.0 (we're using php v5.6.33) https://www.mediawiki.org/wiki/Manual:Performance_tuning#Bytecode_caching
<blockqoute> PHP works by compiling a PHP file into bytecode and then executing that bytecode. The process of compiling a large application such as MediaWiki takes considerable time. PHP accelerators work by storing the compiled bytecode and executing it directly reducing the time spent compiling code.
OPcache is included in PHP 5.5.0 and later and the recommended accelerator for MediaWiki. If unavailable, APC is a decent alternative. Other supported op code caches are: mmTurck, WinCache, XCache.
Opcode caches store the compiled output of PHP scripts, greatly reducing the amount of time needed to run a script multiple times. MediaWiki does not need to be configured to do PHP bytecode caching and will "just work" once installed and enabled them.
- but we can't use OPcache for the mediawiki caching (ie: message caching) since it is only an opcode cache, not an object cache that mediawiki can utilize https://phabricator.wikimedia.org/T58652
- the second-best option appears to be APC
- apc is currently not installed on hetzner2
[root@hetzner2 includes]# rpm -qa | grep -i apc [root@hetzner2 includes]#
- I went to install APC, but apparently it's not in the repos. Instead, there's APCU. Apparently, while (as shown above) OPcache only does opcode caching & no object caching, APC does both opcode caching & object caching, and APCU does only object caching & no opcode caching https://wordpress.stackexchange.com/questions/174317/caching-apc-vs-apcu-vs-opcache
- It's my best judgement that the 2018 solution is to use PHP's built-in OPcache for opcode caching + APCU for object caching.
[root@hetzner2 includes]# yum install php56w-pecl-apcu ... Installed: php56w-pecl-apcu.x86_64 0:4.0.11-2.w7 Complete! [root@hetzner2 includes]#
- I stumbled upon this guide "How to make MediaWiki fast" by Aaron Schulz. Jesús Martínez Novo (User:Ciencia_Al_Poder) told me that, if I file a bug report for the cpPosTime cookie issue, I should CC Aaron https://www.mediawiki.org/wiki/User:Aaron_Schulz/How_to_make_MediaWiki_fast
- It suggests using CACHE_ACCEL (so APCU) for MainCache & MessageCache
- surprisingly, it suggests using $wgCacheDirectory so mw will cache "interface message caching" in files
- surprisingly, it suggests using the db (CACHE_DB) for ParserCache
- it does suggest use of memcached, but I think this is unnecessary until there's >1 server.
- It suggests setting $wgJobRunRate to 0 & creating a systemd daemon that executes mw jobs in the background (see the sketch after this list) per this guide https://www.mediawiki.org/wiki/Manual:Job_queue
- It suggests increasing the realpath_cache_size in php.ini, possibly to 512k or more. I checked the php documentation, and I found that this defaulted to 16K in PHP <= 7.0.16. Latest versions of php, however, set this to 4M.
- It suggests enabling $wgMiserMode, which disables some features in MediaWiki that are db-intensive
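- re the job-runner suggestion above, a minimal sketch of what such a systemd unit might look like for our wiki (untested; assumes our mediawiki's runJobs.php supports --wait & that running as the apache user is acceptable):
cat > /etc/systemd/system/mw-jobrunner.service <<'EOF'
[Unit]
Description=MediaWiki job queue runner
After=mariadb.service

[Service]
User=apache
# --wait makes runJobs.php block for new jobs instead of exiting when the queue is empty
ExecStart=/usr/bin/php /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/runJobs.php --wait --maxjobs=10
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable mw-jobrunner && systemctl start mw-jobrunner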
- I'm going to try the above advice instead of stripping the cookie in varnish
- first, I stripped my (failed) set-cookie-removing logic from varnish, reloaded the varnish config, and verified that the cookie was still present
- next, I replaced LocalSettings with the version that was in-use before I started this debugging process. It will enable the extensions again, among other changes
- then I banned the cache & tested again, confirming that the cookie was still present
- now, on the pre-debug config, I disabled the Message Cache & confirmed that the cookie was missing, confirming that I've isolated the issue to this variable
- confirmed that apc appears to be enabled (after I restarted apache)
[root@hetzner2 ~]# php -i | grep -i apc Additional .ini files parsed => /etc/php.d/apcu.ini, apc APC support => Emulated apcu APCu Support => Disabled APCu Debugging => Disabled MMAP File Mask => /tmp/apc.XXXXXX apc.coredump_unmap => Off => Off apc.enable_cli => Off => Off apc.enabled => On => On apc.entries_hint => 4096 => 4096 apc.gc_ttl => 3600 => 3600 apc.mmap_file_mask => /tmp/apc.XXXXXX => /tmp/apc.XXXXXX apc.preload_path => no value => no value apc.rfc1867 => Off => Off apc.rfc1867_freq => 0 => 0 apc.rfc1867_name => APC_UPLOAD_PROGRESS => APC_UPLOAD_PROGRESS apc.rfc1867_prefix => upload_ => upload_ apc.rfc1867_ttl => 3600 => 3600 apc.serializer => default => default apc.shm_segments => 1 => 1 apc.shm_size => 64M => 64M apc.slam_defense => On => On apc.smart => 0 => 0 apc.ttl => 7200 => 7200 apc.use_request_time => On => On apc.writable => /tmp => /tmp [root@hetzner2 ~]#
- since /tmp is necessarily world-readable & world-writeable, we don't use it for services that are publicly accessible, such as our web server. so I updated apc's config (/etc/php.d/apcu.ini) to use /var/lib/php/apcu
mkdir -p /var/lib/php/apcu/writable
chown -R root:apache /var/lib/php/apcu
chmod -R 0770 /var/lib/php/apcu
cp /etc/php.d/apcu.ini /etc/php.d/apcu.ini.`date "+%Y%m%d_%H%M%S"`.bak
sed -i 's^/tmp/apc.XXXXXX^/var/lib/php/apcu/tmp.XXXXXX^g' /etc/php.d/apcu.ini
echo "apc.writable=/var/lib/php/apcu/writable" >> /etc/php.d/apcu.ini
- updated LocalSettings.php so Message Cache uses APCU with "$wgMessageCacheType = CACHE_ACCEL;"
- tested a query, and saw some apc-related warnings in the debug output
[objectcache] The APCu extension is loaded and the apc.serializer INI setting is set to "default". This can cause memory corruption! You should change apc.serializer to "php" instead. See <https://github.com/krakjoe/apcu/issues/38>. [objectcache] The APCu extension is loaded and the apc.serializer INI setting is set to "default". This can cause memory corruption! You should change apc.serializer to "php" instead. See <https://github.com/krakjoe/apcu/issues/38>. IP: 127.0.0.1 Start request GET /wiki/Michael_Log HTTP HEADERS: X-REAL-IP: 138.201.84.223 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.29.0 ACCEPT: */* X-FORWARDED-FOR: 127.0.0.1 ACCEPT-ENCODING: gzip X-VARNISH: 54014 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: APCBagOStuff, session: APCBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [session] SessionBackend "7155kia65vvl9a18q74m7n41u1s0egor" is unsaved, marking dirty in constructor [session] SessionBackend "7155kia65vvl9a18q74m7n41u1s0egor" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489707345", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489707345", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489707345", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489707345", "/", "", "", "1" [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.29.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. Title::getRestrictionTypes: applicable restrictions to Michael Log are {edit,move} [ContentHandler] Created handler for wikitext: WikitextContentHandler Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::checkLastModified: client did not send If-Modified-Since header MediaWikiGadgetsDefinitionRepo::fetchStructuredList: MediaWiki:Gadgets-definition parsed, cache entry should be updated [MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is expired/volatile, loading from database Unstubbing $wgParser on call of $wgParser::firstCallInit from MessageCache->getParser Parser: using preprocessor: Preprocessor_DOM Unstubbing $wgLang on call of $wgLang::_unstub from ParserOptions->__construct [caches] parser: APCBagOStuff Article::view using parser cache: yes Parser cache options found. ParserOutput cache found. Article::view: showing parser cache contents MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: local proxy caching; Fri, 16 Mar 2018 22:59:28 GMT ** [DBConnection] Connected to database 0 at 'localhost'. 
Request ended normally [session] Saving all sessions on shutdown [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'. [objectcache] The APCu extension is loaded and the apc.serializer INI setting is set to "default". This can cause memory corruption! You should change apc.serializer to "php" instead. See <https://github.com/krakjoe/apcu/issues/38>. [objectcache] The APCu extension is loaded and the apc.serializer INI setting is set to "default". This can cause memory corruption! You should change apc.serializer to "php" instead. See <https://github.com/krakjoe/apcu/issues/38>. IP: 127.0.0.1 Start request GET /wiki/Michael_Log HTTP HEADERS: X-REAL-IP: 138.201.84.223 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.29.0 ACCEPT: */* X-FORWARDED-FOR: 127.0.0.1 ACCEPT-ENCODING: gzip X-VARNISH: 54017 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: APCBagOStuff, session: APCBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [session] SessionBackend "iognvkk9eue6d21086m46t1jqri1so05" is unsaved, marking dirty in constructor [session] SessionBackend "iognvkk9eue6d21086m46t1jqri1so05" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489707347", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489707347", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489707347", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489707347", "/", "", "", "1" [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.29.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. Title::getRestrictionTypes: applicable restrictions to Michael Log are {edit,move} [ContentHandler] Created handler for wikitext: WikitextContentHandler Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::checkLastModified: client did not send If-Modified-Since header MediaWikiGadgetsDefinitionRepo::fetchStructuredList: MediaWiki:Gadgets-definition parsed, cache entry should be updated [MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is expired/volatile, loading from database Unstubbing $wgParser on call of $wgParser::firstCallInit from MessageCache->getParser Parser: using preprocessor: Preprocessor_DOM Unstubbing $wgLang on call of $wgLang::_unstub from ParserOptions->__construct [caches] parser: APCBagOStuff Article::view using parser cache: yes Parser cache options found. ParserOutput cache found. 
Article::view: showing parser cache contents MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: local proxy caching; Fri, 16 Mar 2018 22:59:28 GMT ** [DBConnection] Connected to database 0 at 'localhost'. Request ended normally [session] Saving all sessions on shutdown [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'.
- updated 'apc.serializer' from 'default' to 'php' per the warning message's guidance. apache had to be restarted for the error to go away
IP: 127.0.0.1 Start request GET /wiki/Michael_Log HTTP HEADERS: X-REAL-IP: 138.201.84.223 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.29.0 ACCEPT: */* X-FORWARDED-FOR: 127.0.0.1 ACCEPT-ENCODING: gzip X-VARNISH: 318849 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: APCBagOStuff, session: APCBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [session] SessionBackend "fu5f7vbkir4v3e4s0dc6ch2osjqr2u8p" is unsaved, marking dirty in constructor [session] SessionBackend "fu5f7vbkir4v3e4s0dc6ch2osjqr2u8p" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489707577", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489707577", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489707577", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489707577", "/", "", "", "1" [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.29.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. Title::getRestrictionTypes: applicable restrictions to Michael Log are {edit,move} [ContentHandler] Created handler for wikitext: WikitextContentHandler Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::checkLastModified: client did not send If-Modified-Since header MediaWikiGadgetsDefinitionRepo::fetchStructuredList: MediaWiki:Gadgets-definition parsed, cache entry should be updated [MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is empty, loading from database Unstubbing $wgParser on call of $wgParser::firstCallInit from MessageCache->getParser Parser: using preprocessor: Preprocessor_DOM Unstubbing $wgLang on call of $wgLang::_unstub from ParserOptions->__construct [caches] parser: APCBagOStuff Article::view using parser cache: yes Article::view: doing uncached parse Saved in parser cache with key osewiki_db-wiki_:pcache:idhash:28902-0!canonical and timestamp 20180316233937 and revision id 164196 MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: local proxy caching; Fri, 16 Mar 2018 22:59:28 GMT ** [DBConnection] Connected to database 0 at 'localhost'. Request ended normally [session] Saving all sessions on shutdown [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'.
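- the serializer change itself was an ini edit of the same sort as above; roughly equivalent to this (a sketch; even if apcu.ini already sets apc.serializer, the appended line wins since later directives override):
cp /etc/php.d/apcu.ini /etc/php.d/apcu.ini.`date "+%Y%m%d_%H%M%S"`.bak
echo "apc.serializer=php" >> /etc/php.d/apcu.ini
systemctl restart httpd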
- there don't appear to be any files in /var/lib/php/apcu, but it does appear to be using APC as a cache
- updated LocalSettings.php with the optimization recommendations discovered above, including creating a cache dir outside the docroot
mkdir /var/www/html/wiki.opensourceecology.org/cache
chown apache:apache /var/www/html/wiki.opensourceecology.org/cache
chmod 0770 /var/www/html/wiki.opensourceecology.org/cache
- verified this dir was empty
[root@hetzner2 wiki.opensourceecology.org]# ls -lahR cache cache: total 8.0K drwxrwx--- 2 apache apache 4.0K Mar 16 23:52 . d---r-x--- 4 not-apache apache 4.0K Mar 16 23:55 .. [root@hetzner2 wiki.opensourceecology.org]#
- updated LocalSettings.php to use this dir
## Set $wgCacheDirectory to a writable directory on the web server
## to make your wiki go slightly faster. The directory should not
## be publically accessible from the web.
$wgCacheDirectory = "$IP/../cache";
- tried to query the page, and confirmed that it created a file
[root@hetzner2 wiki.opensourceecology.org]# ls -lahR cache cache: total 1.1M drwxrwx--- 2 apache apache 4.0K Mar 16 23:55 . d---r-x--- 4 not-apache apache 4.0K Mar 16 23:55 .. -rw-r--r-- 1 apache apache 1.1M Mar 16 23:55 l10n_cache-en.cdb [root@hetzner2 wiki.opensourceecology.org]#
- I'm going to skip the changes to wgJobRunRate + systemd file. That would be something that would easily slip through the cracks, and I'm not sure how much speed benefit it adds--especially if most traffic is served by the varnish cache
- I updated /etc/php.ini to use "realpath_cache_size = 4M"
- I organized these changes into a well-commented section of LocalSettings.php as follows
#################
# OPTIMIZATIONS #
#################
# See these links for more info:
#  * https://www.mediawiki.org/wiki/Manual:Performance_tuning
#  * https://www.mediawiki.org/wiki/Manual:Caching
#  * https://www.mediawiki.org/wiki/User:Aaron_Schulz/How_to_make_MediaWiki_fast

# INTERNAL MEDIAWIKI CACHE OPTIONS (DISTINCT FROM VARNISH)

# MainCache and MessageCache should use APCU per Aaron Schulz
$wgMainCacheType = CACHE_ACCEL;

# note that if message cache uses the db (per defaults), then it may make every
# page load include a db change, which causes mediawiki to emit a set-cookie
# for cpPosTime. The cookie's presence coming from the backend causes varnish
# not to cache the page (rightfully so), and the result is that varnish (which
# is our most important cache) is rendered useless. For more info, see:
#  * https://www.mediawiki.org/wiki/Topic:U9fys4phj04a85vu
#  * https://wiki.opensourceecology.org/wiki/Maltfield_log_2018#Thr_Mar_15.2C_2018
$wgMessageCacheType = CACHE_ACCEL;
$wgUseLocalMessageCache = true;

# Parser Cache should still use the DB per Aaron Schulz
$wgParserCacheType = CACHE_DB;

# enable caching navigation sidebar per Aaron Schulz
$wgEnableSidebarCache = true;

# cache interface messages to files in this directory per Aaron Schulz
# note that this should be outside the docroot!
$wgCacheDirectory = "$IP/../cache";

# OTHER OPTIMIZATIONS

# decrease db-heavy features per Aaron Schulz
$wgMiserMode = true;

# Causes serious encoding problems
$wgUseGzip = false;
- updated the support ticket with my solution to the possible-bug = use APCU for Message Cache https://www.mediawiki.org/wiki/Topic:U9fys4phj04a85vu
- tested a login & was able to login successfully
- confirmed that varnish wasn't caching my page views while logged-in
- my first 2x queries after I clicked logout were cache passes (misses) because my cookie was present, but my 3rd query didn't have the cookie, and I got a cache hit
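- this test is easy to reproduce with curl; a sketch of the sort of loop I use (on a hit, the X-Varnish header shows two xids & Age is > 0):
for i in 1 2 3; do
   # -D - dumps the response headers to stdout; the body is discarded
   curl -s -o /dev/null -D - "https://wiki.opensourceecology.org/wiki/Michael_Log" | grep -Ei '^(x-varnish|age):'
done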
- began testing purge integration
- made an edit to a page in one vm & loaded it in another vm; the edit didn't appear. There appear to be issues with purge integration
Thr Mar 15, 2018
- ossec is still alerting on wp-login.php attempts, which were going to the 'default' server. so I made the www.opensourceecology.org vhost & www.openbuildinginstitute.org (which are distinct ip addresses) explicitly the "default_server" with server_name = "_" in their configs. those configs reject that query, so it should prevent false-positives https://serverfault.com/questions/527156/setting-nginx-to-catch-all-unhandled-vhosts
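- conceptually, the relevant catch-all bits look something like this per IP (a sketch, not our literal config; the file name is illustrative & the usual ssl cert directives are omitted):
cat > /etc/nginx/conf.d/000-default.conf <<'EOF'
# requests hitting the bare IP (or any Host that doesn't match a real vhost's
# server_name) land here & get rejected instead of falling into an arbitrary vhost
server {
   listen 138.201.84.243:443 ssl default_server;
   server_name _;
   # ssl_certificate + ssl_certificate_key lines still required here
   return 403;
}
EOF
nginx -t && service nginx reload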
- confirmed that hitting the IPs directly gives me a 403 Forbidden
user@personal:~/sandbox/varnishkafka$ curl -ki "https://138.201.84.243/wp-login.php"
HTTP/1.1 403 Forbidden
Server: nginx
Date: Thu, 15 Mar 2018 15:00:43 GMT
Content-Type: text/html; charset=iso-8859-1
Content-Length: 214
Connection: keep-alive
X-VC-Req-Host: 138.201.84.243
X-VC-Req-URL: /wp-login.php
X-VC-Req-URL-Base: /wp-login.php
X-VC-Cacheable: NO:Not cacheable, ttl: 0.000
X-Varnish: 4203
Age: 0
Via: 1.1 varnish-v4

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /wp-login.php on this server.</p>
</body></html>
user@personal:~/sandbox/varnishkafka$ curl -ki "https://138.201.84.223/wp-login.php"
HTTP/1.1 403 Forbidden
Server: nginx
Date: Thu, 15 Mar 2018 15:01:20 GMT
Content-Type: text/html; charset=iso-8859-1
Content-Length: 214
Connection: keep-alive
X-VC-Req-Host: 138.201.84.223
X-VC-Req-URL: /wp-login.php
X-VC-Req-URL-Base: /wp-login.php
X-VC-Cacheable: NO:Not cacheable, ttl: 0.000
X-Varnish: 111304
Age: 0
Via: 1.1 varnish-v4

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /wp-login.php on this server.</p>
</body></html>
user@personal:~/sandbox/varnishkafka$
- spent some time uploading graphs to my logs, because a picture speaks a thousand words.
- munin shows the change yesterday (disabling the Fundraising wordpress plugin) had a huge impact on our varnish hit rates. Note that the spike/dip ~6 hours after the change was me restarting varnish (I was debugging an unrelated issue with our staging wiki + varnish integration)
- munin also shows a decrease in cpu usage correlating to varnish serving more hits. this is the end goal. right now it's not so apparent because our server isn't heavily trafficked (and our biggest site = our wiki hasn't been migrated yet), but this is a big deal for if/when our site gets hug-of-death'd after, for example, something goes viral on reddit. If varnish is just serving cache hits, our site will continue to run, even with a huge spike of traffic coming in (within reason)
- continued work on integrating mediawiki with varnish
- found old version of wikipedia's puppet configs (with varnish config files) in phabricator https://phabricator.wikimedia.org/source/operations-puppet/browse/production/templates/varnish/;b73afc51fedbe1119f4af35c7d9ab331f78357db
- according to this commit, it was removed in 2016-09 to be moved to a module https://phabricator.wikimedia.org/rOPUPc53f1476ab6cddad6150ce85f1441646ba92444b
- a query for "varnish" in the prod tree shows an entry for 'varnishkafka' in .gitmodules https://phabricator.wikimedia.org/source/operations-puppet/browse/production/?grep=varnish
- but the url it references is a 404!
user@personal:~$ curl -i https://gerrit.wikimedia.org/r/operations/puppet/varnishkafka
HTTP/1.1 404 Not Found
Date: Thu, 15 Mar 2018 14:30:10 GMT
Server: Apache
Strict-Transport-Security: max-age=106384710; includeSubDomains; preload
Content-Type: text/plain;charset=iso-8859-1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Content-Length: 9
Backend-Timing: D=508 t=1521124210965290

Not Founduser@personal:~$
- searched all projects on wikimedia's gerrit for varnish https://gerrit.wikimedia.org/r/#/admin/projects/?filter=varnish
- found a project of the same name https://gerrit.wikimedia.org/r/#/admin/projects/operations/puppet/varnishkafka
- err, it's a 404 on curl, but git understands it
user@personal:~/sandbox$ git clone https://gerrit.wikimedia.org/r/operations/puppet/varnishkafka
Cloning into 'varnishkafka'...
remote: Total 423 (delta 0), reused 423 (delta 0)
Receiving objects: 100% (423/423), 65.54 KiB | 0 bytes/s, done.
Resolving deltas: 100% (277/277), done.
Checking connectivity... done.
user@personal:~/sandbox$
- that git repo doesn't appear to have any vcl configs
user@personal:~/sandbox/varnishkafka$ ls
files  Gemfile  manifests  Rakefile  README.md  templates  tox.ini
user@personal:~/sandbox/varnishkafka$ find . | grep -i vcl
user@personal:~/sandbox/varnishkafka$ grep -irl 'vcl' *
user@personal:~/sandbox/varnishkafka$
- it's not totally transparent to me, but I think it's actually varnish (not the backend) that's setting these cookies for wikipedia https://phabricator.wikimedia.org/rOPUPc53f1476ab6cddad6150ce85f1441646ba92444b
- going back to the code that's producing the "set-cookie cpPosTime" in the file 'includes/MediaWiki.php' in function preOutputCommit(), I should figure out what $urlDomainDistance & $lbFactory->hasOrMadeRecentMasterChanges( INF ) are
if ( $urlDomainDistance === 'local' || $urlDomainDistance === 'remote' ) {
	...
} else {
	// OutputPage::output() is fairly slow; run it in $postCommitWork to mask
	// the latency of syncing DB positions accross all datacenters synchronously
	$flags = $lbFactory::SHUTDOWN_CHRONPROT_SYNC;
	if ( $lbFactory->hasOrMadeRecentMasterChanges( INF ) && $allowHeaders ) {
		$cpPosTime = microtime( true );
		// Set a cookie in case the DB position store cannot sync accross datacenters.
		// This will at least cover the common case of the user staying on the domain.
		$expires = time() + ChronologyProtector::POSITION_TTL;
		$options = [ 'prefix' => '' ];
		error_log( "about to set cpPosTime cookie in spot 2" );
		$request->response()->setCookie( 'cpPosTime', $cpPosTime, $expires, $options );
	}
}
- ironically, this term "Chronology Protector" comes from late & great Stephen Hawking, who passed away earlier this week :( https://en.wikipedia.org/wiki/Chronology_protection_conjecture
- a very relevant mediawiki script to this is includes/libs/rdbms/lbfactory/LBFactory.php. That defines an object = LBFactory with an instance var = $chronProt to determine if Chronology Protector is enabled or not
/**
 * An interface for generating database load balancers
 * @ingroup Database
 */
abstract class LBFactory implements ILBFactory {
	/** @var ChronologyProtector */
	protected $chronProt;
- there's another object = ChronologyProtector which is created by the call to getChronologyProtector()
/**
 * @return ChronologyProtector
 */
protected function getChronologyProtector() {
	if ( $this->chronProt ) {
		return $this->chronProt;
	}

	$this->chronProt = new ChronologyProtector(
		$this->memStash,
		[
			'ip' => $this->requestInfo['IPAddress'],
			'agent' => $this->requestInfo['UserAgent'],
		],
		isset( $_GET['cpPosTime'] ) ? $_GET['cpPosTime'] : null
	);
	$this->chronProt->setLogger( $this->replLogger );

	if ( $this->cliMode ) {
		$this->chronProt->setEnabled( false );
	} elseif ( $this->requestInfo['ChronologyProtection'] === 'false' ) {
		// Request opted out of using position wait logic. This is useful for requests
		// done by the job queue or background ETL that do not have a meaningful session.
		$this->chronProt->setWaitEnabled( false );
	}

	$this->replLogger->debug( __METHOD__ . ': using request info ' . json_encode( $this->requestInfo, JSON_PRETTY_PRINT ) );

	return $this->chronProt;
}
- got a stack trace from within this getChronologyProtector() to see what called it
[Thu Mar 15 18:53:05.906938 2018] [:error] [pid 25589] [client 127.0.0.1:44696] entered getChronologyProtector() function
[Thu Mar 15 18:53:05.907015 2018] [:error] [pid 25589] [client 127.0.0.1:44696]
#0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/lbfactory/LBFactory.php(517): Wikimedia\Rdbms\LBFactory->getChronologyProtector()
#1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/lbfactory/LBFactorySimple.php(128): Wikimedia\Rdbms\LBFactory->baseLoadBalancerParams()
#2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/lbfactory/LBFactorySimple.php(82): Wikimedia\Rdbms\LBFactorySimple->newLoadBalancer(Array)
#3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/lbfactory/LBFactorySimple.php(91): Wikimedia\Rdbms\LBFactorySimple->newMainLB(false)
#4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/ServiceWiring.php(62): Wikimedia\Rdbms\LBFactorySimple->getMainLB()
#5 [internal function]: MediaWiki\Services\ServiceContainer->{closure}(Object(MediaWiki\MediaWikiServices))
#6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/services/ServiceContainer.php(361): call_user_func_array(Object(Closure), Array)
#7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/services/ServiceContainer.php(344): MediaWiki\Services\ServiceContainer->createService('DBLoadBalancer')
#8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWikiServices.php(511): MediaWiki\Services\ServiceContainer->getService('DBLoadBalancer')
#9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(180): MediaWiki\MediaWikiServices->getDBLoadBalancer()
#10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(267): SqlBagOStuff->getDB(0)
#11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(245): SqlBagOStuff->getMulti(Array)
#12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(241): SqlBagOStuff->getWithToken('osewiki_db-wiki...', NULL, 0)
#13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/CachedBagOStuff.php(56): SqlBagOStuff->doGet('osewiki_db-wiki...', 0)
#14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(185): CachedBagOStuff->doGet('osewiki_db-wiki...', 0)
#15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/SessionManager.php(938): BagOStuff->get('osewiki_db-wiki...')
#16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/SessionInfo.php(150): MediaWiki\Session\SessionManager->generateSessionId()
#17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/SessionProvider.php(176): MediaWiki\Session\SessionInfo->__construct(30, Array)
#18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/SessionManager.php(269): MediaWiki\Session\SessionProvider->newSessionInfo(NULL)
#19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/SessionManager.php(243): MediaWiki\Session\SessionManager->getEmptySessionInternal(Object(WebRequest))
#20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/SessionManager.php(193): MediaWiki\Session\SessionManager->getEmptySession(Object(WebRequest))
#21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/WebRequest.php(735): MediaWiki\Session\SessionManager->getSessionForRequest(Object(WebRequest))
#22 /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/SessionManager.php(129): WebRequest->getSession()
#23 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Setup.php(762): MediaWiki\Session\SessionManager::getGlobalSession()
#24 /var/www/html/wiki.opensourceecology.org/htdocs/includes/WebStart.php(114): require_once('/var/www/html/w...')
#25 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(40): require('/var/www/html/w...')
#26 {main}
- so it looks like the DBLoadBalancer is created in includes/services/ServiceContainer.php after being called by includes/objectcache/SqlBagOStuff.php. But I have "$wgMainCacheType = CACHE_NONE;" in LocalSettings.php. Maybe there's some cached objects in the db post-migration that need to be cleared? Or maybe there's another var I need to disable all use of SqlBagOStuff that will in-turn fix this issue?
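- if I want to test the stale-cached-objects hypothesis, the db-backed object cache can be emptied safely (mediawiki lazily repopulates it); a sketch using the db & table names seen in the debug output:
# flush mediawiki's db-backed object cache; it will be rebuilt on demand
mysql osewiki_db --execute='TRUNCATE TABLE wiki_objectcache;'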
- umm. I just went to retest the cache ban -> GET query -> GET query, and varnishlog showed a successful miss->hit! wtf
- maybe the db was stuck on something at the time, but I cannot reproduce the cookie issue that was causing varnish to store a hit-for-pass now
- ok, I was able to reproduce it again by commenting-out my error_log() debug lines in MediaWiki.php; the earlier non-reproduction was probably caused by those lines writing output to the body of the reply before the set-cookie header could be sent, artificially suppressing the cookie
- spent some time reading up on optimizing mediawiki & the caches it uses
- stumbled on a built-in custom PHP profiler in Mediawiki. This could be essential for debugging https://www.mediawiki.org/wiki/Manual:Profiling
- I enabled more db-related output by setting "$wgDebugDBTransactions = true;" in LocalSettings.php
[DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": "true" } IP: 127.0.0.1 Start request GET /wiki/Special:Version HTTP HEADERS: X-REAL-IP: 198.7.58.245 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.38.0 ACCEPT: */* X-FORWARDED-FOR: 127.0.0.1 X-VARNISH: 264293 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: SqlBagOStuff, session: SqlBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. [SQLBagOStuff] Connection 1062367 will be used for SqlBagOStuff [session] SessionBackend "ei9kj09m8meepl0dnerkvlr8hkd8ku8l" is unsaved, marking dirty in constructor [session] SessionBackend "ei9kj09m8meepl0dnerkvlr8hkd8ku8l" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489609512", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489609512", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489609512", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489609512", "/", "", "", "1" [DBConnection] Connected to database 0 at 'localhost'. [DBPerformance] Expectation (writes <= 0) by MediaWiki::main not met (actual: 1): query-m: REPLACE INTO `wiki_objectcache` (keyname,value,exptime) VALUES ('X') ...
- the output appears to suggest that it's connecting to our one database on localhost twice (is that the cluster that the LB class is thinking it needs replication [and therefore CP] for?)
- the output also says that, yes, we're using one of our db connections for SqlBagOStuff for 'message' and 'session' caching
- tried to test disabling the 'stash' cache (which is set to 'db-replicated' per above, and as is the default in 'includes/DefaultSettings.php') by adding "$wgMainStash = CACHE_NONE;" to LocalSettings.php. This worked, as the next request (after cache ban) output this to the debug file: "[caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: 0, message: SqlBagOStuff, session: SqlBagOStuff"
- I probably don't want this permanent; I'm just iterating on all possible options until I can get the cookie to go away.
- discovered that our LocalSettings.php file's "$wgShowIPinHeader = false" should be removed, as this var is deprecated since mediawiki v1.27.0 https://www.mediawiki.org/wiki/Manual:$wgShowIPinHeader
- I added vars to disable both message & session caches (the ones that were mentioned as using SqlBagOStuff in the debug output above) with "$wgMessageCacheType = CACHE_NONE;" & "$wgSessionCacheType = CACHE_NONE;" in LocalSettings.php. The debug output showed those as fixed now, but it still says it's using the db connection for SqlBagOStuff!
[DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": "true" } ... [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: 0, message: EmptyBagOStuff, session: EmptyBagOStuff [caches] LocalisationCache: using store LCStoreDB ... [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. [MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is empty, loading from database Unstubbing $wgParser on call of $wgParser::firstCallInit from MessageCache->getParser Parser: using preprocessor: Preprocessor_DOM Unstubbing $wgLang on call of $wgLang::_unstub from ParserOptions->__construct [gitinfo] Computed cacheFile=/var/www/html/wiki.opensourceecology.org/htdocs/gitinfo.json for /var/www/html/wiki.opensourceecology.org/htdocs [gitinfo] Cache incomplete for /var/www/html/wiki.opensourceecology.org/htdocs [Preprocessor] Cached preprocessor output (key: osewiki_db-wiki_:preprocess-xml:3a8077569b0d753b8983c7c44d07b5b3:0) [Preprocessor] Cached preprocessor output (key: osewiki_db-wiki_:preprocess-xml:3a8077569b0d753b8983c7c44d07b5b3:0) Looking up core head id [gitinfo] Computed cacheFile=/var/www/html/wiki.opensourceecology.org/htdocs/gitinfo.json for /var/www/html/wiki.opensourceecology.org/htdocs [gitinfo] Cache incomplete for /var/www/html/wiki.opensourceecology.org/htdocs [DBConnection] Connected to database 0 at 'localhost'. [DBPerformance] Expectation (masterConns <= 0) by MediaWiki::main not met (actual: 1): [connect to localhost (osewiki_db)] #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/TransactionProfiler.php(164): Wikimedia\Rdbms\TransactionProfiler->reportExpectationViolated('masterConns', '[connect to loc...', 1) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/loadbalancer/LoadBalancer.php(678): Wikimedia\Rdbms\TransactionProfiler->recordConnection('localhost', 'osewiki_db', true) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(184): Wikimedia\Rdbms\LoadBalancer->getConnection(-2, Array, false, 1) #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(267): SqlBagOStuff->getDB(0) #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(245): SqlBagOStuff->getMulti(Array) #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(241): SqlBagOStuff->getWithToken('osewiki_db-wiki...', NULL, 0) #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(185): SqlBagOStuff->doGet('osewiki_db-wiki...', 0) #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialVersion.php(739): BagOStuff->get('osewiki_db-wiki...') #8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialVersion.php(644): SpecialVersion->getCreditsForExtension('skin', Array) #9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialVersion.php(472): SpecialVersion->getExtensionCategory('skin', NULL) #10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specials/SpecialVersion.php(143): SpecialVersion->getSkinCredits() #11 
/var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPage.php(522): SpecialVersion->execute(NULL) #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/specialpage/SpecialPageFactory.php(578): SpecialPage->run(NULL) #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(287): SpecialPageFactory::executePath(Object(Title), Object(RequestContext)) #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(861): MediaWiki->performRequest() #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main() #16 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run() #17 {main} [SQLBagOStuff] Connection 1062529 will be used for SqlBagOStuff
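- (side note: the quickest way to watch which cache backends MediaWiki picks per-request is to tail the debug log and filter for the "[caches]" lines; the log path below is a placeholder for wherever $wgDebugLogFile points in LocalSettings.php)
# follow the MediaWiki debug log & show only the cache-backend lines
# note: /path/to/mediawiki-debug.log is a placeholder, not our actual path
tail -f /path/to/mediawiki-debug.log | grep -F '[caches]'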
- umm, tried again & hits are working fine; here's the debug log
[DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": "true" } IP: 127.0.0.1 Start request GET /wiki/Michael_Log HTTP HEADERS: X-REAL-IP: 198.7.58.245 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.38.0 ACCEPT: */* X-FORWARDED-FOR: 127.0.0.1 ACCEPT-ENCODING: gzip X-VARNISH: 333061 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: 0, message: EmptyBagOStuff, session: EmptyBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [session] SessionBackend "lug2cr578qgttdlit06p4djb0pe30pki" is unsaved, marking dirty in constructor [session] SessionBackend "lug2cr578qgttdlit06p4djb0pe30pki" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489621803", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489621803", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489621803", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489621803", "/", "", "", "1" [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. Title::getRestrictionTypes: applicable restrictions to Michael Log are {edit,move} [ContentHandler] Created handler for wikitext: WikitextContentHandler Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::checkLastModified: client did not send If-Modified-Since header [MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is empty, loading from database Unstubbing $wgParser on call of $wgParser::firstCallInit from MessageCache->getParser Parser: using preprocessor: Preprocessor_DOM Unstubbing $wgLang on call of $wgLang::_unstub from ParserOptions->__construct [caches] parser: SqlBagOStuff Article::view using parser cache: yes [DBConnection] Connected to database 0 at 'localhost'. 
[DBPerformance] Expectation (masterConns <= 0) by MediaWiki::main not met (actual: 1): [connect to localhost (osewiki_db)] #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/TransactionProfiler.php(164): Wikimedia\Rdbms\TransactionProfiler->reportExpectationViolated('masterConns', '[connect to loc...', 1) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/loadbalancer/LoadBalancer.php(678): Wikimedia\Rdbms\TransactionProfiler->recordConnection('localhost', 'osewiki_db', true) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(184): Wikimedia\Rdbms\LoadBalancer->getConnection(-2, Array, false, 1) #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(267): SqlBagOStuff->getDB(0) #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(245): SqlBagOStuff->getMulti(Array) #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(241): SqlBagOStuff->getWithToken('osewiki_db-wiki...', NULL, 2) #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(185): SqlBagOStuff->doGet('osewiki_db-wiki...', 2) #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/parser/ParserCache.php(181): BagOStuff->get('osewiki_db-wiki...', NULL, 2) #8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/parser/ParserCache.php(239): ParserCache->getKey(Object(WikiPage), Object(ParserOptions), 0) #9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/page/Article.php(527): ParserCache->get(Object(WikiPage), Object(ParserOptions)) #10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/actions/ViewAction.php(68): Article->view() #11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(499): ViewAction->show() #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(293): MediaWiki->performAction(Object(Article), Object(Title)) #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(861): MediaWiki->performRequest() #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main() #15 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run() #16 {main} [SQLBagOStuff] Connection 1063179 will be used for SqlBagOStuff Parser cache options found. ParserOutput cache found. Article::view: showing parser cache contents MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: local proxy caching; Thu, 15 Mar 2018 21:00:18 GMT ** Request ended normally [session] Saving all sessions on shutdown [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'.
- I can't say why it's working now, but I can diff the two debug logs to see which lines are relevant to the change; that gives me a thread to pull on for further debugging
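- a minimal sketch of that diff workflow (the debug log path is again a placeholder for $wgDebugLogFile, and this assumes no other traffic is writing to the log between runs):
# truncate the log, trigger the misbehaving request, save a copy
> /path/to/mediawiki-debug.log
curl -s https://wiki.opensourceecology.org/wiki/Michael_Log > /dev/null
cp /path/to/mediawiki-debug.log /tmp/debug.hitForPass.log
# truncate again, trigger the working request, save a second copy
> /path/to/mediawiki-debug.log
curl -s https://wiki.opensourceecology.org/wiki/Michael_Log > /dev/null
cp /path/to/mediawiki-debug.log /tmp/debug.hit.log
# the diff shows exactly which debug lines differ between the two behaviours
diff /tmp/debug.hitForPass.log /tmp/debug.hit.log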
# relevant debug lines when the cookie was generated, causing varnish to store a hit-for-pass
[caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: SqlBagOStuff, session: SqlBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. [SQLBagOStuff] Connection 1057383 will be used for SqlBagOStuff [session] SessionBackend "dd1bufqqmn58vb0ipofornsdraa2kv5r" is unsaved, marking dirty in constructor [session] SessionBackend "dd1bufqqmn58vb0ipofornsdraa2kv5r" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489537981", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489537981", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489537981", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489537981", "/", "", "", "1" [DBConnection] Connected to database 0 at 'localhost'. ... [caches] parser: SqlBagOStuff Article::view using parser cache: yes Parser cache options found. ParserOutput cache found. Article::view: showing parser cache contents MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed [cookie] setcookie: "cpPosTime", "1521073981.6965", "1521074041", "/", "", "1", "1" [DBReplication] Wikimedia\Rdbms\ChronologyProtector::shutdownLB: DB 'localhost' touched MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: private caching; Wed, 14 Mar 2018 19:33:01 GMT ** Request ended normally
# relevant debug lines when the cookie was *not* generated, causing varnish to actually store the response
[caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: 0, message: EmptyBagOStuff, session: EmptyBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [session] SessionBackend "lug2cr578qgttdlit06p4djb0pe30pki" is unsaved, marking dirty in constructor [session] SessionBackend "lug2cr578qgttdlit06p4djb0pe30pki" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489621803", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489621803", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489621803", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489621803", "/", "", "", "1" [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection.
[DBConnection] Connected to database 0 at 'localhost'. ... [caches] parser: SqlBagOStuff Article::view using parser cache: yes Parser cache options found. ParserOutput cache found. Article::view: showing parser cache contents MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: local proxy caching; Thu, 15 Mar 2018 21:00:18 GMT ** Request ended normally
- note that when it behaves, the "[SQLBagOStuff] Connection 1057383 will be used for SqlBagOStuff" line is absent
- I commented-out the "$wgSessionCacheType = CACHE_NONE;" line in LocalSettings.php, banned the cache, queried the server, and the cookie came back
- then I tried again without changing the config (ban -> query -> check varnish log), and the cookie was not set; I got a hit.
- I uncommented the line above, banned the cache, queried the server, and the cookie came back!!
- then (for the second time) I tried again without changing the config (so still uncommented): ban -> query -> check varnish log, and the cookie was not set; I got a hit.
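- for reference, each iteration of the above cycle was roughly the following (the scripted version appears a few lines below; the varnishlog filter is the same source-IP filter used elsewhere in this log):
# ban all cached objects for the wiki vhost
varnishadm 'ban req.http.host ~ "wiki.opensourceecology.org"'
# query the server & check whether the cpPosTime cookie came back
curl -si https://wiki.opensourceecology.org/wiki/Michael_Log | head -n 25 | grep -i 'set-cookie'
# check varnish's verdict for the request (-d also reads records already in the buffer)
varnishlog -d -q "ReqHeader eq 'X-Forwarded-For: 138.201.84.223'" | grep -Ei 'hit|pass|miss'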
- These results are very useful: the cookie is generated when there is a *change* to our config. Whether the option was set or commented-out didn't matter; the ChronologyProtection cookie (which is intended to notify the LB that a change was recently made to the master DB) appeared on the first load after any change, then disappeared on subsequent loads once the cache was purged (or, I would expect, after the TTL for the hit-for-pass expired). I'm guessing this is because some of the config state is stored in the database, and it's not synced until the first request comes along after the change.
- I think my testing may be logically flawed; perhaps what I was seeing as the cookie popping up was just routine cron-like db tasks that occur every X minutes, such that a well-trafficked site would only infrequently trigger a hit-for-pass, but my send-a-query-every-few-minutes testing may just happen to coincide with a master db change (and therefore this CP cookie & hit-for-pass) every time I tested a query. To eliminate that risk, I'd need my testing to be a rapid succession of ban, query, query, wait a few seconds, then repeat, and only check whether the very last query was a hit/miss/hit-for-pass. So I did that:
[root@hetzner2 ~]# for run in {1..3}; do varnishadm 'ban req.http.host ~ "wiki.opensourceecology.org"'; sleep 1; curl -si https://wiki.opensourceecology.org/wiki/Michael_Log 2>/dev/null | head -n 25 | grep -i 'set-cookie'; curl -si https://wiki.opensourceecology.org/wiki/Michael_Log 2>/dev/null | head -n 25 | grep -i 'set-cookie'; sleep 1; done
Set-Cookie: cpPosTime=1521164025.1436; expires=Fri, 16-Mar-2018 01:34:45 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly
[root@hetzner2 ~]#
- and the corresponding varnishlog output shows the hit-for-pass caused by the cookie on the first try. Then subsequent calls are miss -> hit = successful
[root@hetzner2 ~]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 138.201.84.223'" | grep -Ei 'hit|pass|miss' - Debug "XXXX MISS" - VCL_call MISS - Debug "XXXX HIT-FOR-PASS" - HitPass 528631 - VCL_call PASS - Link bereq 446030 pass - Debug "XXXX MISS" - VCL_call MISS - Hit 528634 - VCL_call HIT - Debug "XXXX MISS" - VCL_call MISS - Hit 528637 - VCL_call HIT
- now I began walking-back my disabling of the caching in LocalSettings.php. I commented-out all the new lines so it now reads
#$wgMainStash = CACHE_NONE;
#$wgMessageCacheType = CACHE_NONE;
#$wgSessionCacheType = CACHE_NONE;
- and I ran the loop; this time I got the set-cookie on *every* call
[root@hetzner2 ~]# for run in {1..3}; do varnishadm 'ban req.http.host ~ "wiki.opensourceecology.org"'; sleep 1; curl -si https://wiki.opensourceecology.org/wiki/Michael_Log 2>/dev/null | head -n 25 | grep -i 'set-cookie'; curl -si https://wiki.opensourceecology.org/wiki/Michael_Log 2>/dev/null | head -n 25 | grep -i 'set-cookie'; sleep 1; done
Set-Cookie: cpPosTime=1521164321.3098; expires=Fri, 16-Mar-2018 01:39:41 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly
Set-Cookie: cpPosTime=1521164321.5951; expires=Fri, 16-Mar-2018 01:39:41 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly
Set-Cookie: cpPosTime=1521164324.0022; expires=Fri, 16-Mar-2018 01:39:44 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly
Set-Cookie: cpPosTime=1521164324.2661; expires=Fri, 16-Mar-2018 01:39:44 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly
Set-Cookie: cpPosTime=1521164326.6755; expires=Fri, 16-Mar-2018 01:39:46 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly
Set-Cookie: cpPosTime=1521164326.9391; expires=Fri, 16-Mar-2018 01:39:46 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly
[root@hetzner2 ~]#
- and the varnishlog shows hit-for-pass every time too
- Debug "XXXX MISS" - VCL_call MISS - Debug "XXXX HIT-FOR-PASS" - HitPass 409347 - VCL_call PASS - Link bereq 494319 pass - Debug "XXXX MISS" - VCL_call MISS - Debug "XXXX HIT-FOR-PASS" - HitPass 409374 - VCL_call PASS - Link bereq 409381 pass - Debug "XXXX MISS" - VCL_call MISS - Debug "XXXX HIT-FOR-PASS" - HitPass 494364 - VCL_call PASS - Link bereq 409394 pass
- I isolated it specifically to the $wgMessageCacheType variable; if this is *not* set to CACHE_NONE, then the cookie is always present, causing varnish to store the hit-for-pass
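- (to see at a glance which of the three cache settings are active vs commented-out, a quick grep of LocalSettings.php does it; the htdocs path matches the one in the stack traces above)
# list the three cache-related settings with their line numbers
grep -n 'wgMainStash\|wgMessageCacheType\|wgSessionCacheType' /var/www/html/wiki.opensourceecology.org/htdocs/LocalSettings.php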
- here's the debug output when the $wgMessageCacheType variable is *not* set to CACHE_NONE
[DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.29.0", "ChronologyProtection": "true" } IP: 127.0.0.1 Start request GET /wiki/Michael_Log HTTP HEADERS: X-REAL-IP: 138.201.84.223 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.29.0 ACCEPT: */* X-FORWARDED-FOR: 127.0.0.1 X-VARNISH: 494755 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: SqlBagOStuff, session: SqlBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.29.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. [SQLBagOStuff] Connection 1064033 will be used for SqlBagOStuff [session] SessionBackend "lg3t586cdk8n4q9ba89ug1su8ind99ck" is unsaved, marking dirty in constructor [session] SessionBackend "lg3t586cdk8n4q9ba89ug1su8ind99ck" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489629400", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489629400", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489629400", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489629400", "/", "", "", "1" [DBConnection] Connected to database 0 at 'localhost'. 
Title::getRestrictionTypes: applicable restrictions to Michael Log are {edit,move} [ContentHandler] Created handler for wikitext: WikitextContentHandler Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::checkLastModified: client did not send If-Modified-Since header [DBPerformance] Expectation (writes <= 0) by MediaWiki::main not met (actual: 1): query-m: REPLACE INTO `wiki_objectcache` (keyname,value,exptime) VALUES ('X') #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/TransactionProfiler.php(219): Wikimedia\Rdbms\TransactionProfiler->reportExpectationViolated('writes', 'query-m: REPLAC...', 1) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(1037): Wikimedia\Rdbms\TransactionProfiler->recordQueryCompletion('query-m: REPLAC...', 1521165400.3345, true, 1) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(937): Wikimedia\Rdbms\Database->doProfiledQuery('REPLACE INTO `w...', 'REPLACE /* SqlB...', true, 'SqlBagOStuff::s...') #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(2263): Wikimedia\Rdbms\Database->query('REPLACE INTO `w...', 'SqlBagOStuff::s...') #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(497): Wikimedia\Rdbms\Database->nativeReplace('objectcache', Array, 'SqlBagOStuff::s...') #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(362): Wikimedia\Rdbms\DatabaseMysqlBase->replace('objectcache', Array, Array, 'SqlBagOStuff::s...') #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(376): SqlBagOStuff->setMulti(Array, 30) #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(545): SqlBagOStuff->set('osewiki_db-wiki...', 1, 30) #8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(418): BagOStuff->add('osewiki_db-wiki...', 1, 30) #9 [internal function]: BagOStuff->{closure}() #10 /var/www/html/wiki.opensourceecology.org/htdocs/vendor/wikimedia/wait-condition-loop/src/WaitConditionLoop.php(92): call_user_func(Object(Closure)) #11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(429): Wikimedia\WaitConditionLoop->invoke() #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(472): BagOStuff->lock('osewiki_db-wiki...', 0, 30, 'MessageCache::g...') #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(762): BagOStuff->getScopedLock('osewiki_db-wiki...', 0, 30, 'MessageCache::g...') #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(420): MessageCache->getReentrantScopedLock('osewiki_db-wiki...', 0) #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(350): MessageCache->loadFromDBWithLock('en', Array, NULL) #16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(991): MessageCache->load('en') #17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(921): MessageCache->getMsgFromNamespace('Pagetitle', 'en') #18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(888): MessageCache->getMessageForLang(Object(LanguageEn), 'pagetitle', true, Array) #19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(829): 
MessageCache->getMessageFromFallbackChain(Object(LanguageEn), 'pagetitle', true) #20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(1275): MessageCache->get('pagetitle', true, Object(LanguageEn)) #21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(842): Message->fetchMessage() #22 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(934): Message->toString('text') #23 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(950): Message->text() #24 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(998): OutputPage->setHTMLTitle(Object(Message)) #25 /var/www/html/wiki.opensourceecology.org/htdocs/includes/page/Article.php(463): OutputPage->setPageTitle('Maltfield Log') #26 /var/www/html/wiki.opensourceecology.org/htdocs/includes/actions/ViewAction.php(68): Article->view() #27 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(499): ViewAction->show() #28 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(293): MediaWiki->performAction(Object(Article), Object(Title)) #29 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(861): MediaWiki->performRequest() #30 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main() #31 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run() #32 {main} [DBPerformance] Expectation (writes <= 0) by MediaWiki::main not met (actual: 2): query-m: REPLACE INTO `wiki_objectcache` (keyname,value,exptime) VALUES ('X') #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/TransactionProfiler.php(219): Wikimedia\Rdbms\TransactionProfiler->reportExpectationViolated('writes', 'query-m: REPLAC...', 2) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(1037): Wikimedia\Rdbms\TransactionProfiler->recordQueryCompletion('query-m: REPLAC...', 1521165400.3374, true, 2) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(937): Wikimedia\Rdbms\Database->doProfiledQuery('REPLACE INTO `w...', 'REPLACE /* SqlB...', true, 'SqlBagOStuff::s...') #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(2263): Wikimedia\Rdbms\Database->query('REPLACE INTO `w...', 'SqlBagOStuff::s...') #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(497): Wikimedia\Rdbms\Database->nativeReplace('objectcache', Array, 'SqlBagOStuff::s...') #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(362): Wikimedia\Rdbms\DatabaseMysqlBase->replace('objectcache', Array, Array, 'SqlBagOStuff::s...') #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(376): SqlBagOStuff->setMulti(Array, 0) #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(690): SqlBagOStuff->set('osewiki_db-wiki...', Array) #8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(428): MessageCache->saveToCaches(Array, 'all', 'en') #9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(350): MessageCache->loadFromDBWithLock('en', Array, NULL) #10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(991): MessageCache->load('en') #11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(921): MessageCache->getMsgFromNamespace('Pagetitle', 'en') 
#12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(888): MessageCache->getMessageForLang(Object(LanguageEn), 'pagetitle', true, Array) #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(829): MessageCache->getMessageFromFallbackChain(Object(LanguageEn), 'pagetitle', true) #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(1275): MessageCache->get('pagetitle', true, Object(LanguageEn)) #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(842): Message->fetchMessage() #16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(934): Message->toString('text') #17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(950): Message->text() #18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(998): OutputPage->setHTMLTitle(Object(Message)) #19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/page/Article.php(463): OutputPage->setPageTitle('Maltfield Log') #20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/actions/ViewAction.php(68): Article->view() #21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(499): ViewAction->show() #22 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(293): MediaWiki->performAction(Object(Article), Object(Title)) #23 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(861): MediaWiki->performRequest() #24 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main() #25 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run() #26 {main} [DBPerformance] Expectation (writes <= 0) by MediaWiki::main not met (actual: 3): query-m: DELETE FROM `wiki_objectcache` WHERE keyname = 'X' #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/TransactionProfiler.php(219): Wikimedia\Rdbms\TransactionProfiler->reportExpectationViolated('writes', 'query-m: DELETE...', 3) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(1037): Wikimedia\Rdbms\TransactionProfiler->recordQueryCompletion('query-m: DELETE...', 1521165400.3392, true, 1) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(937): Wikimedia\Rdbms\Database->doProfiledQuery('DELETE FROM `wi...', 'DELETE /* SqlBa...', true, 'SqlBagOStuff::d...') #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(2370): Wikimedia\Rdbms\Database->query('DELETE FROM `wi...', 'SqlBagOStuff::d...') #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(433): Wikimedia\Rdbms\Database->delete('objectcache', Array, 'SqlBagOStuff::d...') #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(447): SqlBagOStuff->delete('osewiki_db-wiki...') #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(485): BagOStuff->unlock('osewiki_db-wiki...') #7 [internal function]: BagOStuff->{closure}() #8 /var/www/html/wiki.opensourceecology.org/htdocs/vendor/wikimedia/scoped-callback/src/ScopedCallback.php(76): call_user_func_array(Object(Closure), Array) #9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(350): Wikimedia\ScopedCallback->__destruct() #10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(350): MessageCache->loadFromDBWithLock('en', Array, NULL) #11 
/var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(991): MessageCache->load('en') #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(921): MessageCache->getMsgFromNamespace('Pagetitle', 'en') #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(888): MessageCache->getMessageForLang(Object(LanguageEn), 'pagetitle', true, Array) #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(829): MessageCache->getMessageFromFallbackChain(Object(LanguageEn), 'pagetitle', true) #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(1275): MessageCache->get('pagetitle', true, Object(LanguageEn)) #16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(842): Message->fetchMessage() #17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(934): Message->toString('text') #18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(950): Message->text() #19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(998): OutputPage->setHTMLTitle(Object(Message)) #20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/page/Article.php(463): OutputPage->setPageTitle('Maltfield Log') #21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/actions/ViewAction.php(68): Article->view() #22 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(499): ViewAction->show() #23 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(293): MediaWiki->performAction(Object(Article), Object(Title)) #24 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(861): MediaWiki->performRequest() #25 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main() #26 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run() #27 {main} [MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is expired/volatile, loading from database Unstubbing $wgParser on call of $wgParser::firstCallInit from MessageCache->getParser Parser: using preprocessor: Preprocessor_DOM Unstubbing $wgLang on call of $wgLang::_unstub from ParserOptions->__construct [caches] parser: SqlBagOStuff Article::view using parser cache: yes Parser cache options found. ParserOutput cache found. Article::view: showing parser cache contents MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed [cookie] setcookie: "cpPosTime", "1521165400.3692", "1521165460", "/", "", "1", "1" [DBReplication] Wikimedia\Rdbms\ChronologyProtector::shutdownLB: DB 'localhost' touched MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: private caching; Fri, 16 Mar 2018 01:55:42 GMT ** Request ended normally [session] Saving all sessions on shutdown [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'.
- and here's the debug output when it *is* set to CACHE_NONE.
[DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.29.0", "ChronologyProtection": "true" } IP: 127.0.0.1 Start request GET /wiki/Michael_Log HTTP HEADERS: X-REAL-IP: 138.201.84.223 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.29.0 ACCEPT: */* X-FORWARDED-FOR: 127.0.0.1 ACCEPT-ENCODING: gzip X-VARNISH: 146580 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: EmptyBagOStuff, session: SqlBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.29.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. [SQLBagOStuff] Connection 1064073 will be used for SqlBagOStuff [session] SessionBackend "nhu5kl4t72jhuu3nfcus835sjgpdfpa0" is unsaved, marking dirty in constructor [session] SessionBackend "nhu5kl4t72jhuu3nfcus835sjgpdfpa0" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489629637", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489629637", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489629637", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489629637", "/", "", "", "1" [DBConnection] Connected to database 0 at 'localhost'. Title::getRestrictionTypes: applicable restrictions to Michael Log are {edit,move} [ContentHandler] Created handler for wikitext: WikitextContentHandler Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::checkLastModified: client did not send If-Modified-Since header [MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is empty, loading from database Unstubbing $wgParser on call of $wgParser::firstCallInit from MessageCache->getParser Parser: using preprocessor: Preprocessor_DOM Unstubbing $wgLang on call of $wgLang::_unstub from ParserOptions->__construct [caches] parser: SqlBagOStuff Article::view using parser cache: yes Parser cache options found. ParserOutput cache found. Article::view: showing parser cache contents MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: local proxy caching; Fri, 16 Mar 2018 01:58:47 GMT ** Request ended normally [session] Saving all sessions on shutdown [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'.
- Note the "[caches]" line that shows "message: EmptyBagOStuff" instead of "message: SqlBagOStuff"
- Note that the "[MessageCache]" line is also different
# when it is *not* set to CACHE_NONE:
[MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is expired/volatile, loading from database
# when it *is* set to CACHE_NONE:
[MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is empty, loading from database
Wed Mar 14, 2018
- marked the osemain migration as successful, lifting the content freeze http://opensourceecology.org/wiki/CHG-2018-02-05
- returned to investigating the unnecessarily high hit-for-pass responses in varnish for osemain due to the misbehaving 'fundraiser' wordpress plugin that is injecting cache-control headers, telling varnish not to cache
- found that the image of the qr code for bitcoin was missing https://staging.opensourceecology.org/community/#support
- attempted to update the page using a relative path to the image, rather than "http://..."
- got a modsecurity false-positive
- whitelisted 958008, xss
- whitelisted 973329, xss
- couldn't see any other issues with the differences between the staging site (with the Fundraising plugin disabled) and the production site (with the Fundraising plugin enabled)
- confirmed that the offending cache-control headers were still present from apache
[root@hetzner2 conf.d]# curl -I "http://127.0.0.1:8000/" -H "Host: www.opensourceecology.org"
HTTP/1.1 301 Moved Permanently
Date: Wed, 14 Mar 2018 19:21:50 GMT
Server: Apache
X-VC-Enabled: true
X-VC-TTL: 86400
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Location: https://www.opensourceecology.org/
X-XSS-Protection: 1; mode=block
Set-Cookie: OSESESSION=srghpko27hns1894gud5dm8r5tohv2e9hkrth94ges26qpi6pbv749m3i8rst09c9ns5ojp1i90lr0hfu2jgsernm2b2kutu7shdbg3; path=/; HttpOnly;HttpOnly
Content-Type: text/html; charset=UTF-8
[root@hetzner2 conf.d]#
- disabled the Fundraising app on production
- confirmed that the offending cache-control headers were now absent
[root@hetzner2 conf.d]# curl -I "http://127.0.0.1:8000/" -H "Host: www.opensourceecology.org"
HTTP/1.1 301 Moved Permanently
Date: Wed, 14 Mar 2018 19:22:49 GMT
Server: Apache
X-VC-Enabled: true
X-VC-TTL: 86400
Location: https://www.opensourceecology.org/
X-XSS-Protection: 1; mode=block
Content-Type: text/html; charset=UTF-8
[root@hetzner2 conf.d]#
- continued work on integrating mediawiki with varnish
- given the rabbit hole of investigating the root cause of the cache-control headers causing misses/hit-for-passes, I'm just going to ignore the backend's requests to avoid caching, per the official guide's recommendation https://www.mediawiki.org/wiki/Manual:Varnish_caching#Configuring_Varnish_4.x
- confirmed that repeated queries were not hits
[root@hetzner2 ~]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 198.7.58.245'" | grep -Ei 'hit|pass|miss' - Debug "XXXX MISS" - VCL_call MISS - VCL_return pass - VCL_call PASS - Link bereq 5180433 pass - VCL_return pass - VCL_call PASS - Link bereq 5180436 pass - VCL_return pass - VCL_call PASS - Link bereq 4632277 pass - VCL_return pass - VCL_call PASS - Link bereq 5180444 pass
- updated the varnish config for our wiki site, adding the vcl logic from the official guide linked above to functions: vcl_recv, vcl_backend_response, vcl_pipe, vcl_hit, and vcl_miss
- had to remove the line "set req.backend_hint= default;" from vcl_recv as we already do this in our structured multi-vhost config after checking the req.http.host in every function
- reloaded varnish config
- refreshed a wiki page a couple of times & watched the varnishlog output, but it was the same result: no hit.
[root@hetzner2 ~]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 198.7.58.245'" | grep -Ei 'hit|pass|miss' - Debug "XXXX HIT-FOR-PASS" - HitPass 2455036 - VCL_call PASS - Link bereq 5245753 pass - VCL_return pass - VCL_call PASS - Link bereq 4633208 pass - VCL_return pass - VCL_call PASS - Link bereq 4633211 pass - VCL_return pass - VCL_call PASS - Link bereq 5245756 pass - Debug "XXXX MISS" - VCL_call MISS - Debug "XXXX MISS" - VCL_call MISS
- got the full output of the varnishlog
[root@hetzner2 ~]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 198.7.58.245'" * << Request >> 2455213 - Begin req 2455212 rxreq - Timestamp Start: 1521058112.833068 0.000000 0.000000 - Timestamp Req: 1521058112.833068 0.000000 0.000000 - ReqStart 127.0.0.1 38790 - ReqMethod GET - ReqURL /wiki/Maltfield_Log - ReqProtocol HTTP/1.0 - ReqHeader X-Real-IP: 198.7.58.245 - ReqHeader X-Forwarded-For: 198.7.58.245 - ReqHeader X-Forwarded-Proto: https - ReqHeader X-Forwarded-Port: 443 - ReqHeader Host: wiki.opensourceecology.org - ReqHeader Connection: close - ReqHeader User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0 - ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 - ReqHeader Accept-Language: en-US,en;q=0.5 - ReqHeader Accept-Encoding: gzip, deflate, br - ReqHeader Cookie: cpPosTime=1521058094.075 - ReqHeader DNT: 1 - ReqHeader Upgrade-Insecure-Requests: 1 - ReqUnset X-Forwarded-For: 198.7.58.245 - ReqHeader X-Forwarded-For: 198.7.58.245, 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 198.7.58.245, 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1 - ReqUnset Accept-Encoding: gzip, deflate, br - ReqHeader Accept-Encoding: gzip - VCL_return hash - VCL_call HASH - VCL_return lookup - Debug "XXXX MISS" - VCL_call MISS - VCL_return fetch - Link bereq 2455214 fetch - Timestamp Fetch: 1521058113.024141 0.191072 0.191072 - RespProtocol HTTP/1.1 - RespStatus 200 - RespReason OK - RespHeader Date: Wed, 14 Mar 2018 20:08:32 GMT - RespHeader Server: Apache - RespHeader X-Content-Type-Options: nosniff - RespHeader Content-language: en - RespHeader X-UA-Compatible: IE=Edge - RespHeader Link: </images/ose-logo.png?be82f>;rel=preload;as=image - RespHeader Vary: Accept-Encoding,Cookie - RespHeader Expires: Thu, 01 Jan 1970 00:00:00 GMT - RespHeader Cache-Control: private, must-revalidate, max-age=0 - RespHeader Last-Modified: Wed, 14 Mar 2018 15:08:32 GMT - RespHeader X-XSS-Protection: 1; mode=block - RespHeader Set-Cookie: cpPosTime=1521058112.9626; expires=Wed, 14-Mar-2018 20:09:32 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly - RespHeader Content-Type: text/html; charset=UTF-8 - RespHeader X-Varnish: 2455213 - RespHeader Age: 0 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - VCL_return deliver - Timestamp Process: 1521058113.024168 0.191099 0.000027 - RespStatus 200 - RespReason OK - RespHeader Date: Wed, 14 Mar 2018 20:08:32 GMT - RespHeader Server: Apache - RespHeader X-Content-Type-Options: nosniff - RespHeader Content-language: en - RespHeader X-UA-Compatible: IE=Edge - RespHeader Link: </images/ose-logo.png?be82f>;rel=preload;as=image - RespHeader Vary: Accept-Encoding,Cookie - RespHeader Expires: Thu, 01 Jan 1970 00:00:00 GMT - RespHeader Cache-Control: private, must-revalidate, max-age=0 - RespHeader Last-Modified: Wed, 14 Mar 2018 15:08:32 GMT - RespHeader X-XSS-Protection: 1; mode=block - RespHeader Set-Cookie: cpPosTime=1521058112.9626; expires=Wed, 14-Mar-2018 20:09:32 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly - RespHeader Content-Type: text/html; charset=UTF-8 - RespHeader X-Varnish: 2455213 - RespHeader Age: 0 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - VCL_return deliver - Timestamp Process: 1521058113.024168 0.191099 0.000027 - Debug "RES_MODE 4" - RespHeader Connection: close - RespHeader Accept-Ranges: bytes - Timestamp Resp: 1521058113.038627 0.205559 0.014459 - Debug "XXX REF 2" - ReqAcct 490 0 490 666 16114 16780 - End [root@hetzner2 ~]#
- the varnish book shows that the hit-for-pass is triggered when setting 'beresp.uncacheable' to 'true' https://book.varnish-software.com/4.0/chapters/VCL_Basics.html
- the mediawiki config I adopted from their manual does this in several instances in vcl_backend_response()
sub vcl_backend_response {
   if ( beresp.backend.name == "wiki_opensourceecology_org" ){

      # set minimum timeouts to auto-discard stored objects
      set beresp.grace = 120s;

      if (beresp.ttl < 48h) {
         set beresp.ttl = 48h;
      }

      if (!beresp.ttl > 0s) {
         set beresp.uncacheable = true;
         return (deliver);
      }

      if (beresp.http.Set-Cookie) {
         set beresp.uncacheable = true;
         return (deliver);
      }

#      if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
#         set beresp.uncacheable = true;
#         return (deliver);
#      }

      if (beresp.http.Authorization && !beresp.http.Cache-Control ~ "public") {
         set beresp.uncacheable = true;
         return (deliver);
      }

      return (deliver);
   }
}
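- (for reference, a VCL reload without restarting varnish or dropping the cache can be done via varnishadm; the VCL file path and the label below are assumptions)
# compile & load the edited VCL under a new label, then switch to it
varnishadm vcl.load wikiFix01 /etc/varnish/default.vcl
varnishadm vcl.use wikiFix01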
- I tried commenting-out all 4x if blocks, reloaded varnish, and the issue still persisted
- I read the definition of hit-for-pass in the varnish book, and it gives responses with set-cookie headers as its example of responses that should not be cached. This makes sense https://book.varnish-software.com/4.0/chapters/VCL_Subroutines.html#hit-for-pass
- saw that our response does include a set-cookie header
Set-Cookie: cpPosTime=1521072364.1071; expires=Thu, 15-Mar-2018 00:07:04 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly
- a quick google shows cpPosTime was added to MediaWiki 1.28; its release notes say this about it https://www.mediawiki.org/wiki/MediaWiki_1.28
After a client performs an action which alters a database that has replica databases, MediaWiki will wait for the replica databases to synchronize with the master database while it renders the HTML output. However, if the output is a redirect to another wiki on the wiki farm with a different domain, MediaWiki will instead alter the redirect URL to include a ?cpPosTime parameter that triggers the database synchronization when the URL is followed by the client. The same-domain case uses a new cpPosTime cookie.
- umm, we only have 1 fucking database.
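- (a quick sanity check that no replication is actually configured on our db; this assumes credentials are available, e.g. via ~/.my.cnf)
# if this prints nothing, the server has no slave/replica configured
mysql -e 'SHOW SLAVE STATUS\G'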
- the purpose & use of cpPosTime by load balancers appears to be better described in the MediaWiki architecture article
https://www.mediawiki.org/wiki/Manual:MediaWiki_architecture
MediaWiki also has built-in support for load balancing, added as early as 2004 in MediaWiki 1.2 (when Wikipedia got its second server, a big deal at the time). The load balancer (MediaWiki's PHP code that decides which server to connect to) is now a critical part of Wikimedia's infrastructure, which explains its influence on some algorithm decisions in the code. The system administrator can specify in MediaWiki's configuration that there is one master database server, and any number of slave database servers; a weight can be assigned to each server. The load balancer will send all writes to the master, and will balance reads according to the weights. It also keeps track of the replication lag of each slave. If a slave's replication lag exceeds 30 seconds, it will not receive any read queries to allow it to catch up; if all slaves are lagged more than 30 seconds, MediaWiki will automatically put itself in read-only mode.
MediaWiki's "chronology protector" ensures that replication lag never causes a user to see a page that claims an action they've just performed hasn't happened yet. This is done by storing the master's position in the user's session if a request they made resulted in a write query. The next time the user makes a read request, the load balancer reads this position from the session, and tries to select a slave that has caught up to that replication position to serve the request. If none is available, it will wait until one is. It may appear to other users as though the action hasn't happened yet, but the chronology remains consistent for each user.
- that makes more sense, but it still doesn't apply to us. why are we setting this cookie? also, why is my simple curl of my log making a 'write' action to the db?
curl -I https://wiki.opensourceecology.org/wiki/Maltfield_Log
- grepped through the mediawiki sourcecode in the includes dir, and saw that 'includes/MediaWiki.php' was setting this cpPosTime cookie
[root@hetzner2 htdocs]# grep -irC10 'cppostime' includes/
...
includes/MediaWiki.php-            $output->getRedirect() &&
includes/MediaWiki.php-            $lbFactory->hasOrMadeRecentMasterChanges( INF )
includes/MediaWiki.php-         ) ? self::getUrlDomainDistance( $output->getRedirect() ) : false;
includes/MediaWiki.php-
includes/MediaWiki.php-         $allowHeaders = !( $output->isDisabled() || headers_sent() );
includes/MediaWiki.php-         if ( $urlDomainDistance === 'local' || $urlDomainDistance === 'remote' ) {
includes/MediaWiki.php-            // OutputPage::output() will be fast; $postCommitWork will not be useful for
includes/MediaWiki.php-            // masking the latency of syncing DB positions accross all datacenters synchronously.
includes/MediaWiki.php-            // Instead, make use of the RTT time of the client follow redirects.
includes/MediaWiki.php-            $flags = $lbFactory::SHUTDOWN_CHRONPROT_ASYNC;
includes/MediaWiki.php:            $cpPosTime = microtime( true );
includes/MediaWiki.php-            // Client's next request should see 1+ positions with this DBMasterPos::asOf() time
includes/MediaWiki.php-            if ( $urlDomainDistance === 'local' && $allowHeaders ) {
includes/MediaWiki.php-               // Client will stay on this domain, so set an unobtrusive cookie
includes/MediaWiki.php-               $expires = time() + ChronologyProtector::POSITION_TTL;
includes/MediaWiki.php-               $options = [ 'prefix' => '' ];
includes/MediaWiki.php:               $request->response()->setCookie( 'cpPosTime', $cpPosTime, $expires, $options );
includes/MediaWiki.php-            } else {
includes/MediaWiki.php-               // Cookies may not work across wiki domains, so use a URL parameter
includes/MediaWiki.php-               $safeUrl = $lbFactory->appendPreShutdownTimeAsQuery(
includes/MediaWiki.php-                  $output->getRedirect(),
includes/MediaWiki.php:                  $cpPosTime
includes/MediaWiki.php-               );
includes/MediaWiki.php-               $output->redirect( $safeUrl );
includes/MediaWiki.php-            }
includes/MediaWiki.php-         } else {
includes/MediaWiki.php-            // OutputPage::output() is fairly slow; run it in $postCommitWork to mask
includes/MediaWiki.php-            // the latency of syncing DB positions accross all datacenters synchronously
includes/MediaWiki.php-            $flags = $lbFactory::SHUTDOWN_CHRONPROT_SYNC;
includes/MediaWiki.php-            if ( $lbFactory->hasOrMadeRecentMasterChanges( INF ) && $allowHeaders ) {
includes/MediaWiki.php:               $cpPosTime = microtime( true );
includes/MediaWiki.php-               // Set a cookie in case the DB position store cannot sync accross datacenters.
includes/MediaWiki.php-               // This will at least cover the common case of the user staying on the domain.
includes/MediaWiki.php-               $expires = time() + ChronologyProtector::POSITION_TTL;
includes/MediaWiki.php-               $options = [ 'prefix' => '' ];
includes/MediaWiki.php:               $request->response()->setCookie( 'cpPosTime', $cpPosTime, $expires, $options );
includes/MediaWiki.php-            }
includes/MediaWiki.php-         }
includes/MediaWiki.php-         // Record ChronologyProtector positions for DBs affected in this request at this point
includes/MediaWiki.php-         $lbFactory->shutdown( $flags, $postCommitWork );
includes/MediaWiki.php-         wfDebug( __METHOD__ . ': LBFactory shutdown completed' );
includes/MediaWiki.php-
includes/MediaWiki.php-         // Set a cookie to tell all CDN edge nodes to "stick" the user to the DC that handles this
includes/MediaWiki.php-         // POST request (e.g. the "master" data center). Also have the user briefly bypass CDN so
includes/MediaWiki.php-         // ChronologyProtector works for cacheable URLs.
includes/MediaWiki.php-         if ( $request->wasPosted() && $lbFactory->hasOrMadeRecentMasterChanges() ) {
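- (the next thread to pull would be where hasOrMadeRecentMasterChanges() gets its state from; the same recursive grep technique works)
# trace the function that decides whether a 'recent master change' happened
grep -rn 'hasOrMadeRecentMasterChanges' includes/ | head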
- quick sanity check: what does the prod site do? I confirmed that it does *not* set this cookie. hmm. But could it be because Cloudflare is altering it? Damn middle boxes obfuscating stuff *shrug*
user@personal:~$ curl -I http://opensourceecology.org/wiki/Maltfield_Log
HTTP/1.1 200 OK
Date: Thu, 15 Mar 2018 00:28:59 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Set-Cookie: __cfduid=d9d55ba85a13ac6b64a950ed80e0987b81521073738; expires=Fri, 15-Mar-19 00:28:58 GMT; path=/; domain=.opensourceecology.org; HttpOnly
X-Powered-By: PHP/5.4.45
X-Content-Type-Options: nosniff
Vary: Accept-Encoding,Cookie
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: private, must-revalidate, max-age=0
Content-Language: en
Last-Modified: Mon, 12 Mar 2018 19:04:00 GMT
Server: cloudflare
CF-RAY: 3fbadcf3e1289f60-IAD
user@personal:~$
- checked the debug log file and found some relevant info
- entire log
IP: 127.0.0.1 Start request HEAD /wiki/Maltfield_Log HTTP HEADERS: X-REAL-IP: 198.7.58.245 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.38.0 ACCEPT: */* X-FORWARDED-FOR: 127.0.0.1 X-VARNISH: 4637894 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: SqlBagOStuff, session: SqlBagOStuff [caches] LocalisationCache: using store LCStoreDB [CryptRand] openssl_random_pseudo_bytes generated 20 bytes of strong randomness. [CryptRand] 0 bytes of randomness leftover in the buffer. [DBReplication] Wikimedia\Rdbms\LBFactory::getChronologyProtector: using request info { "IPAddress": "127.0.0.1", "UserAgent": "curl\/7.38.0", "ChronologyProtection": false } [DBConnection] Wikimedia\Rdbms\LoadBalancer::openConnection: calling initLB() before first connection. [DBConnection] Connected to database 0 at 'localhost'. [SQLBagOStuff] Connection 1057383 will be used for SqlBagOStuff [session] SessionBackend "dd1bufqqmn58vb0ipofornsdraa2kv5r" is unsaved, marking dirty in constructor [session] SessionBackend "dd1bufqqmn58vb0ipofornsdraa2kv5r" save: dataDirty=1 metaDirty=1 forcePersist=0 [cookie] already deleted setcookie: "osewiki_db_wiki__session", "", "1489537981", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_UserID", "", "1489537981", "/", "", "1", "1" [cookie] already deleted setcookie: "osewiki_db_wiki_Token", "", "1489537981", "/", "", "1", "1" [cookie] already deleted setcookie: "forceHTTPS", "", "1489537981", "/", "", "", "1" [DBConnection] Connected to database 0 at 'localhost'. Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} [ContentHandler] Created handler for wikitext: WikitextContentHandler OutputPage::checkLastModified: client did not send If-Modified-Since header [DBPerformance] Expectation (writes <= 0) by MediaWiki::main not met (actual: 1): query-m: REPLACE INTO `wiki_objectcache` (keyname,value,exptime) VALUES ('X') #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/TransactionProfiler.php(219): Wikimedia\Rdbms\TransactionProfiler->reportExpectationViolated('writes', 'query-m: REPLAC...', 1) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(1037): Wikimedia\Rdbms\TransactionProfiler->recordQueryCompletion('query-m: REPLAC...', 1521073981.6619, true, 1) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(937): Wikimedia\Rdbms\Database->doProfiledQuery('REPLACE INTO `w...', 'REPLACE /* SqlB...', true, 'SqlBagOStuff::s...') #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(2263): Wikimedia\Rdbms\Database->query('REPLACE INTO `w...', 'SqlBagOStuff::s...') #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(497): Wikimedia\Rdbms\Database->nativeReplace('objectcache', Array, 'SqlBagOStuff::s...') #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(362): Wikimedia\Rdbms\DatabaseMysqlBase->replace('objectcache', Array, Array, 'SqlBagOStuff::s...') #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(376): SqlBagOStuff->setMulti(Array, 30) #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(545): SqlBagOStuff->set('osewiki_db-wiki...', 1, 30) #8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(418): 
BagOStuff->add('osewiki_db-wiki...', 1, 30) #9 [internal function]: BagOStuff->{closure}() #10 /var/www/html/wiki.opensourceecology.org/htdocs/vendor/wikimedia/wait-condition-loop/src/WaitConditionLoop.php(92): call_user_func(Object(Closure)) #11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(429): Wikimedia\WaitConditionLoop->invoke() #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(472): BagOStuff->lock('osewiki_db-wiki...', 0, 30, 'MessageCache::g...') #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(762): BagOStuff->getScopedLock('osewiki_db-wiki...', 0, 30, 'MessageCache::g...') #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(420): MessageCache->getReentrantScopedLock('osewiki_db-wiki...', 0) #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(350): MessageCache->loadFromDBWithLock('en', Array, NULL) #16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(991): MessageCache->load('en') #17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(921): MessageCache->getMsgFromNamespace('Pagetitle', 'en') #18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(888): MessageCache->getMessageForLang(Object(LanguageEn), 'pagetitle', true, Array) #19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(829): MessageCache->getMessageFromFallbackChain(Object(LanguageEn), 'pagetitle', true) #20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(1275): MessageCache->get('pagetitle', true, Object(LanguageEn)) #21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(842): Message->fetchMessage() #22 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(934): Message->toString('text') #23 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(950): Message->text() #24 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(998): OutputPage->setHTMLTitle(Object(Message)) #25 /var/www/html/wiki.opensourceecology.org/htdocs/includes/page/Article.php(463): OutputPage->setPageTitle('Maltfield Log') #26 /var/www/html/wiki.opensourceecology.org/htdocs/includes/actions/ViewAction.php(68): Article->view() #27 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(499): ViewAction->show() #28 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(293): MediaWiki->performAction(Object(Article), Object(Title)) #29 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(851): MediaWiki->performRequest() #30 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main() #31 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run() #32 {main} [DBPerformance] Expectation (writes <= 0) by MediaWiki::main not met (actual: 2): query-m: REPLACE INTO `wiki_objectcache` (keyname,value,exptime) VALUES ('X') #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/TransactionProfiler.php(219): Wikimedia\Rdbms\TransactionProfiler->reportExpectationViolated('writes', 'query-m: REPLAC...', 2) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(1037): Wikimedia\Rdbms\TransactionProfiler->recordQueryCompletion('query-m: REPLAC...', 1521073981.6661, true, 2) #2 
/var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(937): Wikimedia\Rdbms\Database->doProfiledQuery('REPLACE INTO `w...', 'REPLACE /* SqlB...', true, 'SqlBagOStuff::s...') #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(2263): Wikimedia\Rdbms\Database->query('REPLACE INTO `w...', 'SqlBagOStuff::s...') #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/DatabaseMysqlBase.php(497): Wikimedia\Rdbms\Database->nativeReplace('objectcache', Array, 'SqlBagOStuff::s...') #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(362): Wikimedia\Rdbms\DatabaseMysqlBase->replace('objectcache', Array, Array, 'SqlBagOStuff::s...') #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(376): SqlBagOStuff->setMulti(Array, 0) #7 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(690): SqlBagOStuff->set('osewiki_db-wiki...', Array) #8 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(428): MessageCache->saveToCaches(Array, 'all', 'en') #9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(350): MessageCache->loadFromDBWithLock('en', Array, NULL) #10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(991): MessageCache->load('en') #11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(921): MessageCache->getMsgFromNamespace('Pagetitle', 'en') #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(888): MessageCache->getMessageForLang(Object(LanguageEn), 'pagetitle', true, Array) #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(829): MessageCache->getMessageFromFallbackChain(Object(LanguageEn), 'pagetitle', true) #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(1275): MessageCache->get('pagetitle', true, Object(LanguageEn)) #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(842): Message->fetchMessage() #16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(934): Message->toString('text') #17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(950): Message->text() #18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(998): OutputPage->setHTMLTitle(Object(Message)) #19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/page/Article.php(463): OutputPage->setPageTitle('Maltfield Log') #20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/actions/ViewAction.php(68): Article->view() #21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(499): ViewAction->show() #22 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(293): MediaWiki->performAction(Object(Article), Object(Title)) #23 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(851): MediaWiki->performRequest() #24 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main() #25 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run() #26 {main} [DBPerformance] Expectation (writes <= 0) by MediaWiki::main not met (actual: 3): query-m: DELETE FROM `wiki_objectcache` WHERE keyname = 'X' #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/TransactionProfiler.php(219): 
Wikimedia\Rdbms\TransactionProfiler->reportExpectationViolated('writes', 'query-m: DELETE...', 3) #1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(1037): Wikimedia\Rdbms\TransactionProfiler->recordQueryCompletion('query-m: DELETE...', 1521073981.668, true, 1) #2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(937): Wikimedia\Rdbms\Database->doProfiledQuery('DELETE FROM `wi...', 'DELETE /* SqlBa...', true, 'SqlBagOStuff::d...') #3 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php(2370): Wikimedia\Rdbms\Database->query('DELETE FROM `wi...', 'SqlBagOStuff::d...') #4 /var/www/html/wiki.opensourceecology.org/htdocs/includes/objectcache/SqlBagOStuff.php(433): Wikimedia\Rdbms\Database->delete('objectcache', Array, 'SqlBagOStuff::d...') #5 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(447): SqlBagOStuff->delete('osewiki_db-wiki...') #6 /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/objectcache/BagOStuff.php(485): BagOStuff->unlock('osewiki_db-wiki...') #7 [internal function]: BagOStuff->{closure}() #8 /var/www/html/wiki.opensourceecology.org/htdocs/vendor/wikimedia/scoped-callback/src/ScopedCallback.php(76): call_user_func_array(Object(Closure), Array) #9 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(350): Wikimedia\ScopedCallback->__destruct() #10 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(350): MessageCache->loadFromDBWithLock('en', Array, NULL) #11 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(991): MessageCache->load('en') #12 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(921): MessageCache->getMsgFromNamespace('Pagetitle', 'en') #13 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(888): MessageCache->getMessageForLang(Object(LanguageEn), 'pagetitle', true, Array) #14 /var/www/html/wiki.opensourceecology.org/htdocs/includes/cache/MessageCache.php(829): MessageCache->getMessageFromFallbackChain(Object(LanguageEn), 'pagetitle', true) #15 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(1275): MessageCache->get('pagetitle', true, Object(LanguageEn)) #16 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(842): Message->fetchMessage() #17 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Message.php(934): Message->toString('text') #18 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(950): Message->text() #19 /var/www/html/wiki.opensourceecology.org/htdocs/includes/OutputPage.php(998): OutputPage->setHTMLTitle(Object(Message)) #20 /var/www/html/wiki.opensourceecology.org/htdocs/includes/page/Article.php(463): OutputPage->setPageTitle('Maltfield Log') #21 /var/www/html/wiki.opensourceecology.org/htdocs/includes/actions/ViewAction.php(68): Article->view() #22 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(499): ViewAction->show() #23 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(293): MediaWiki->performAction(Object(Article), Object(Title)) #24 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(851): MediaWiki->performRequest() #25 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main() #26 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run() #27 {main} 
[MessageCache] MessageCache::load: Loading en... local cache is empty, global cache is expired/volatile, loading from database Unstubbing $wgParser on call of $wgParser::firstCallInit from MessageCache->getParser Parser: using preprocessor: Preprocessor_DOM Unstubbing $wgLang on call of $wgLang::_unstub from ParserOptions->__construct [caches] parser: SqlBagOStuff Article::view using parser cache: yes Parser cache options found. ParserOutput cache found. Article::view: showing parser cache contents MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed [cookie] setcookie: "cpPosTime", "1521073981.6965", "1521074041", "/", "", "1", "1" [DBReplication] Wikimedia\Rdbms\ChronologyProtector::shutdownLB: DB 'localhost' touched MediaWiki::preOutputCommit: LBFactory shutdown completed Title::getRestrictionTypes: applicable restrictions to Maltfield Log are {edit,move} OutputPage::haveCacheVaryCookies: no cache-varying cookies found OutputPage::sendCacheControl: private caching; Wed, 14 Mar 2018 19:33:01 GMT ** Request ended normally [session] Saving all sessions on shutdown [DBConnection] Closing connection to database 'localhost'. [DBConnection] Closing connection to database 'localhost'.
- relevant lines
MediaWiki::preOutputCommit: primary transaction round committed MediaWiki::preOutputCommit: pre-send deferred updates completed [cookie] setcookie: "cpPosTime", "1521073981.6965", "1521074041", "/", "", "1", "1" [DBReplication] Wikimedia\Rdbms\ChronologyProtector::shutdownLB: DB 'localhost' touched
- this is becoming another rabbit hole, so I did a quick check, which at first suggested this was not the issue: I added a vcl line to delete the backend's Set-Cookie header
unset beresp.http.Set-Cookie;
- and I confirmed that the Set-Cookie header was now absent
user@personal:~$ curl -I https://wiki.opensourceecology.org/wiki/Maltfield_Log HTTP/1.1 200 OK Server: nginx Date: Thu, 15 Mar 2018 00:53:51 GMT Content-Type: text/html; charset=UTF-8 Content-Length: 0 Connection: keep-alive X-Content-Type-Options: nosniff Content-language: en X-UA-Compatible: IE=Edge Link: </images/ose-logo.png?be82f>;rel=preload;as=image Vary: Accept-Encoding,Cookie Expires: Thu, 01 Jan 1970 00:00:00 GMT Cache-Control: private, must-revalidate, max-age=0 Last-Modified: Thu, 15 Mar 2018 00:46:25 GMT X-XSS-Protection: 1; mode=block X-Fuck: Yeah X-Varnish: 5280664 Age: 0 Via: 1.1 varnish-v4 Strict-Transport-Security: max-age=15552001 Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report" user@personal:~$
- but varnish still did a hit-for-pass
[root@hetzner2 ~]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 198.7.58.245'" | grep -Ei 'hit|pass|miss' - Debug "XXXX HIT-FOR-PASS" - HitPass 2455036 - VCL_call PASS - Link bereq 5154925 pass
- hmm...I just realized that by using the '-I' argument to curl, I don't merely tell curl to output only the headers, but I'm sending a HEAD request to the server instead of a GET request.
- started using `curl -si ... | head` instead
user@personal:~$ curl -si https://wiki.opensourceecology.org/wiki/Maltfield_Log 2>&1 | head -n 25 HTTP/1.1 200 OK Server: nginx Date: Thu, 15 Mar 2018 01:15:22 GMT Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: keep-alive X-Content-Type-Options: nosniff Content-language: en X-UA-Compatible: IE=Edge Link: </images/ose-logo.png?be82f>;rel=preload;as=image Vary: Accept-Encoding,Cookie X-XSS-Protection: 1; mode=block X-Fuck: Yeah X-Varnish: 5281396 Age: 0 Via: 1.1 varnish-v4 Accept-Ranges: bytes Strict-Transport-Security: max-age=15552001 Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report" 1<!DOCTYPE html> <html class="client-nojs" lang="en" dir="ltr"> <head> <meta charset="UTF-8"/> <title>Maltfield Log - Open Source Ecology</title> user@personal:~$
- this still produces a hit-for-pass, but the ReqMethod is a GET again
* << Request >> 1923444 - Begin req 1923443 rxreq - Timestamp Start: 1521076586.730409 0.000000 0.000000 - Timestamp Req: 1521076586.730409 0.000000 0.000000 - ReqStart 127.0.0.1 57422 - ReqMethod GET - ReqURL /wiki/Maltfield_Log - ReqProtocol HTTP/1.0 - ReqHeader X-Real-IP: 198.7.58.245 - ReqHeader X-Forwarded-For: 198.7.58.245 - ReqHeader X-Forwarded-Proto: https - ReqHeader X-Forwarded-Port: 443 - ReqHeader Host: wiki.opensourceecology.org - ReqHeader Connection: close - ReqHeader User-Agent: curl/7.38.0 - ReqHeader Accept: */* - ReqUnset X-Forwarded-For: 198.7.58.245 - ReqHeader X-Forwarded-For: 198.7.58.245, 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 198.7.58.245, 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_return hash - VCL_call HASH - VCL_return lookup - Debug "XXXX HIT-FOR-PASS" - HitPass 2455036 - VCL_call PASS - VCL_return fetch - Link bereq 1923445 pass - Timestamp Fetch: 1521076586.923748 0.193339 0.193339 - RespProtocol HTTP/1.1 - RespStatus 200 - RespReason OK - RespHeader Date: Thu, 15 Mar 2018 01:16:26 GMT - RespHeader Server: Apache - RespHeader X-Content-Type-Options: nosniff - RespHeader Content-language: en - RespHeader X-UA-Compatible: IE=Edge - RespHeader Link: </images/ose-logo.png?be82f>;rel=preload;as=image - RespHeader Vary: Accept-Encoding,Cookie - RespHeader X-XSS-Protection: 1; mode=block - RespHeader Content-Type: text/html; charset=UTF-8 - RespHeader X-Fuck: Yeah - RespHeader X-Varnish: 1923444 - RespHeader Age: 0 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - VCL_return deliver - Timestamp Process: 1521076586.923774 0.193365 0.000026 - Debug "RES_MODE 4" - RespHeader Connection: close - RespHeader Accept-Ranges: bytes - Timestamp Resp: 1521076586.938319 0.207910 0.014545 - Debug "XXX REF 1" - ReqAcct 232 0 232 417 16115 16532 - End
- I figured out what was happening. It's clear from the above output if you follow it chronologically. First, vcl_recv is called
- VCL_call RECV - ReqUnset X-Forwarded-For: 198.7.58.245, 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_return hash
- vcl_recv returned 'hash', so the vcl_hash subroutine is called. We don't return anything in ours, so the default varnish subroutine is executed, which simply returns 'lookup'
- VCL_call HASH - VCL_return lookup
- the lookup then returns what's in the cache. And there is an object in the cache. That object is a special 'hit-for-pass' object! So it has nothing to do with our backend response; we encountered the hit-for-pass before we even called the backend! The backend is only called when fetch is returned, which calls the vcl_backend_fetch() subroutine
- VCL_return lookup - Debug "XXXX HIT-FOR-PASS" - HitPass 2455036 - VCL_call PASS - VCL_return fetch
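- for context: in varnish 4, returning with "set beresp.uncacheable = true" doesn't just skip the cache for that one response; it stores a special hit-for-pass marker object for the duration of beresp.ttl, and every later lookup that hashes to the same object short-circuits straight to pass. Note that the mediawiki-recommended config quoted under Tue Mar 06 bumps beresp.ttl to 48h *before* its Set-Cookie check, which would make a single cookied response poison the entry for two days. A minimal sketch of the pattern that creates these objects:
sub vcl_backend_response {
    if (beresp.http.Set-Cookie) {
        # this does not merely bypass the cache: it stores a hit-for-pass
        # object for beresp.ttl, short-circuiting future lookups to pass
        set beresp.uncacheable = true;
        return (deliver);
    }
}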
- I confirmed the above when I restarted varnish; the next 2 subsequent GETs resulted in a MISS followed by a HIT
* << Request >> 2 - Begin req 1 rxreq - Timestamp Start: 1521077596.856872 0.000000 0.000000 - Timestamp Req: 1521077596.856872 0.000000 0.000000 - ReqStart 127.0.0.1 59084 - ReqMethod GET - ReqURL /wiki/Maltfield_Log - ReqProtocol HTTP/1.0 - ReqHeader X-Real-IP: 198.7.58.245 - ReqHeader X-Forwarded-For: 198.7.58.245 - ReqHeader X-Forwarded-Proto: https - ReqHeader X-Forwarded-Port: 443 - ReqHeader Host: wiki.opensourceecology.org - ReqHeader Connection: close - ReqHeader User-Agent: curl/7.38.0 - ReqHeader Accept: */* - ReqUnset X-Forwarded-For: 198.7.58.245 - ReqHeader X-Forwarded-For: 198.7.58.245, 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 198.7.58.245, 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_return hash - VCL_call HASH - VCL_return lookup - Debug "XXXX MISS" - VCL_call MISS - VCL_return fetch - Link bereq 3 fetch - Timestamp Fetch: 1521077597.055543 0.198670 0.198670 - RespProtocol HTTP/1.1 - RespStatus 200 - RespReason OK - RespHeader Date: Thu, 15 Mar 2018 01:33:16 GMT - RespHeader Server: Apache - RespHeader X-Content-Type-Options: nosniff - RespHeader Content-language: en - RespHeader X-UA-Compatible: IE=Edge - RespHeader Link: </images/ose-logo.png?be82f>;rel=preload;as=image - RespHeader Vary: Accept-Encoding,Cookie - RespHeader X-XSS-Protection: 1; mode=block - RespHeader Content-Type: text/html; charset=UTF-8 - RespHeader X-Fuck: Yeah - RespHeader X-Varnish: 2 - RespHeader Age: 0 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - VCL_return deliver - Timestamp Process: 1521077597.055605 0.198733 0.000063 - Debug "RES_MODE 4" - RespHeader Connection: close - RespHeader Accept-Ranges: bytes - Timestamp Resp: 1521077597.070207 0.213335 0.014602 - Debug "XXX REF 2" - ReqAcct 232 0 232 411 16115 16526 - End * << Request >> 5 - Begin req 4 rxreq - Timestamp Start: 1521077609.811064 0.000000 0.000000 - Timestamp Req: 1521077609.811064 0.000000 0.000000 - ReqStart 127.0.0.1 59088 - ReqMethod GET - ReqURL /wiki/Maltfield_Log - ReqProtocol HTTP/1.0 - ReqHeader X-Real-IP: 198.7.58.245 - ReqHeader X-Forwarded-For: 198.7.58.245 - ReqHeader X-Forwarded-Proto: https - ReqHeader X-Forwarded-Port: 443 - ReqHeader Host: wiki.opensourceecology.org - ReqHeader Connection: close - ReqHeader User-Agent: curl/7.38.0 - ReqHeader Accept: */* - ReqUnset X-Forwarded-For: 198.7.58.245 - ReqHeader X-Forwarded-For: 198.7.58.245, 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 198.7.58.245, 127.0.0.1 - ReqHeader X-Forwarded-For: 127.0.0.1 - VCL_return hash - VCL_call HASH - VCL_return lookup - Hit 3 - VCL_call HIT - VCL_return deliver - RespProtocol HTTP/1.1 - RespStatus 200 - RespReason OK - RespHeader Date: Thu, 15 Mar 2018 01:33:16 GMT - RespHeader Server: Apache - RespHeader X-Content-Type-Options: nosniff - RespHeader Content-language: en - RespHeader X-UA-Compatible: IE=Edge - RespHeader Link: </images/ose-logo.png?be82f>;rel=preload;as=image - RespHeader Vary: Accept-Encoding,Cookie - RespHeader X-XSS-Protection: 1; mode=block - RespHeader Content-Type: text/html; charset=UTF-8 - RespHeader X-Fuck: Yeah - RespHeader X-Varnish: 5 3 - RespHeader Age: 13 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - VCL_return deliver - Timestamp Process: 1521077609.811101 0.000037 0.000037 - RespHeader Content-Length: 16115 - Debug "RES_MODE 2" - RespHeader Connection: close - RespHeader Accept-Ranges: bytes - Timestamp Resp: 1521077609.811124 0.000059 0.000023 - Debug "XXX REF 2" - ReqAcct 232 0 232 437 16115 16552 - End
- so apparently my calls to clear the cache using the ban command were not clearing hit-for-pass objects from the cache
[root@hetzner2 maintenance]# varnishd -Cf /etc/varnish/default.vcl && service varnish reload && varnishadm 'ban req.url ~ "wiki.opensourceecology.org"'
- further confirmation showed that the "req.url ~ ..." match fails; req.url only holds the path (e.g. '/wiki/Maltfield_Log'), not the hostname, so a ban matching the domain against req.url can never match anything
- I found an alternative that does work by matching req.http.host
varnishadm 'ban req.http.host ~ "wiki.opensourceecology.org"'
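- to verify that a ban actually registered (and to watch it age out), varnishadm's ban.list command prints the active ban list; bans can also be compounded, e.g. host plus url prefix:
varnishadm ban.list
varnishadm 'ban req.http.host ~ "wiki.opensourceecology.org" && req.url ~ "^/wiki/"'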
- updated our documentation on how to clear vhost-specific caches in varnish Web_server_configuration#Varnish
- now that I successfully got a hit, I began undoing my varnish config hacking until I could reproduce the hit-for-pass & isolate the source
- isolated it to this block; when I comment it out, I can get a miss->hit. when I leave it uncommented, I get a miss->hit-for-pass
# if (beresp.http.Set-Cookie) { # set beresp.uncacheable = true; # return (deliver); # }
- that confirms our above assumption that the cpPosTime cookie being set by the backend response was causing varnish to do the hit-for-pass. Jumping back down that rabbit hole...
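- a narrower fix worth considering: instead of letting any Set-Cookie make the object uncacheable, strip just this one cookie from the backend response. An untested sketch:
sub vcl_backend_response {
    # hypothetical: cpPosTime is only a chronology-protection hint for the
    # client, so dropping it on anonymous page views keeps pages cacheable
    if (beresp.http.Set-Cookie ~ "^cpPosTime=") {
        unset beresp.http.Set-Cookie;
    }
}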
- the cpPosTime logic is inside 'includes/MediaWiki.php' within the function preOutputCommit()
/** * This function commits all DB changes as needed before * the user can receive a response (in case commit fails) * @param IContextSource $context * @param callable $postCommitWork [default: null] * @since 1.27 */ public static function preOutputCommit( IContextSource $context, callable $postCommitWork = null ) {
- there are 2x instances of the call that sets this cookie in this function; I isolated it to the second one (the error_log() call is mine, added for debugging)
if ( $urlDomainDistance === 'local' || $urlDomainDistance === 'remote' ) { ... } else { // OutputPage::output() is fairly slow; run it in $postCommitWork to mask // the latency of syncing DB positions accross all datacenters synchronously $flags = $lbFactory::SHUTDOWN_CHRONPROT_SYNC; if ( $lbFactory->hasOrMadeRecentMasterChanges( INF ) && $allowHeaders ) { $cpPosTime = microtime( true ); // Set a cookie in case the DB position store cannot sync accross datacenters. // This will at least cover the common case of the user staying on the domain. $expires = time() + ChronologyProtector::POSITION_TTL; $options = [ 'prefix' => '' ]; error_log( "about to set cpPosTime cookie in spot 2" ); $request->response()->setCookie( 'cpPosTime', $cpPosTime, $expires, $options ); } }
- a stack trace shows that this function was called by index.php's last line calling "$mediaWiki->run();" -> includes/MediaWiki.php:874 = "$this->doPreOutputCommit( $outputWork );" from within private function main()
[Thu Mar 15 03:39:03.257652 2018] [:error] [pid 25129] [client 127.0.0.1:55868] #0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(571): MediaWiki::preOutputCommit(Object(RequestContext), Object(Closure))\n#1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(874): MediaWiki->doPreOutputCommit(Object(Closure))\n#2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/MediaWiki.php(523): MediaWiki->main()\n#3 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(43): MediaWiki->run()\n#4 {main}
- curiously, this line is just below this block, which mentions our "ChronologyProtector"
// Actually do the work of the request and build up any output $this->performRequest(); // GUI-ify and stash the page output in MediaWiki::doPreOutputCommit() while // ChronologyProtector synchronizes DB positions or slaves accross all datacenters. $buffer = null; $outputWork = function () use ( $output, &$buffer ) { if ( $buffer === null ) { $buffer = $output->output( true ); } return $buffer; }; // Now commit any transactions, so that unreported errors after // output() don't roll back the whole DB transaction and so that // we avoid having both success and error text in the response $this->doPreOutputCommit( $outputWork ); // Now send the actual output print $outputWork(); }
- as a sanity check, I saw that wikipedia also sets cookies (though *different* cookies) for a simple GET by an anon user too!
user@personal:~$ curl -si "https://en.wikipedia.org/wiki/Open_Source_Ecology" | head -n 30 | grep -i cookie Vary: Accept-Encoding,Cookie,Authorization Set-Cookie: WMF-Last-Access=15-Mar-2018;Path=/;HttpOnly;secure;Expires=Mon, 16 Apr 2018 00:00:00 GMT Set-Cookie: WMF-Last-Access-Global=15-Mar-2018;Path=/;Domain=.wikipedia.org;HttpOnly;secure;Expires=Mon, 16 Apr 2018 00:00:00 GMT X-Analytics: ns=0;page_id=32407015;https=1;nocookies=1 Set-Cookie: GeoIP=US:VA:Manassas:38.79:-77.54:v4; Path=/; secure; Domain=.wikipedia.org user@personal:~$
- I'll have to research how wikipedia's varnish config handles these WMF-Last-Access, WMF-Last-Access-Global, & GeoIP cookies. It may be translatable to how I should handle this cpPosTime cookie without affecting varnish caching.
Mon Mar 12, 2018
- tuning-out noise on ossec email alerts; a sketch of how these rule IDs can be silenced follows the list below
- 990012 = bad robots
- 950109 = multiple url encoding detected
- 60912 = failed to parse request body
- 958291 = Range: field exists and begins with 0.
- 960035 = URL file extension is restricted by policy
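- silencing these could be done with a level-0 override in /var/ossec/rules/local_rules.xml; a hedged sketch (the rule id 100100 and the match strings are assumptions, not our final config):
<rule id="100100" level="0"> <if_sid>30411</if_sid> <match>Pragma Header requires Cache-Control Header|Request Missing a User Agent Header</match> <description>Ignore noisy modsecurity alerts.</description> </rule>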
- continuing to debug the no-cache headers on the wiki
[root@hetzner2 wiki.opensourceecology.org]# for file in $(echo $files); do grep -il 'no-cache, no-store, max-age=0, must-revalidate' $file; done htdocs/includes/OutputPage.php htdocs/includes/OutputPage.phpee htdocs/includes/specials/SpecialRevisiondelete.php htdocs/includes/specials/SpecialUndelete.php htdocs/extensions/ConfirmAccount/frontend/specialpages/actions/ConfirmAccount_body.php htdocs/extensions/ConfirmAccount/frontend/specialpages/actions/UserCredentials_body.php [root@hetzner2 wiki.opensourceecology.org]#
- OutputPage appears to default to using the cache. The code that sets our undesired header is wrapped in an if whose condition is triggered when 'mEnableClientCache' is not true. It defaults to true. It's flipped to false (causing no-cache headers) when enableClientCache( false ) is called. This occurs when $parserOutput->isCacheable() is found to be false and when the page is an error page. Both of those cases are reasonable.
- SpecialRevisiondelete.php appears to use the no-cache headers only in the tryShowFile() function, which is apparently just used when displaying an old article that was deleted (at a specific version). This doesn't appear to be relevant.
- The SpecialUndelete.php file sets the headers in showFile(), but probably only when attempting to restore a deleted article. This doesn't appear to be relevant.
- And the ConfirmAccount scripts shouldn't be relevant as they're disabled.
- so my best guess is that it's still OutputPage.php, caused by isCacheable() returning false
- tried adding 'error_log' output into the OutputPage.php script to no avail; it doesn't appear to come from there
- added error_log() output for debugging purposes to the other files above
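- (for reference, the debugging hack is just a one-liner dropped next to each suspect header call; something like:)
error_log( "nocache headers hit: " . ( new Exception() )->getTraceAsString() );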
- couldn't reproduce with any, but then I noticed that the headers are different!
root@hetzner2 ConfirmAccount]# curl -I "http://127.0.0.1:8000/wiki/Main_Page" -H "Host: wiki.opensourceecology.org" HTTP/1.1 200 OK Date: Tue, 13 Mar 2018 00:10:05 GMT Server: Apache X-Content-Type-Options: nosniff Content-language: en X-UA-Compatible: IE=Edge Link: </images/ose-logo.png?be82f>;rel=preload;as=image Vary: Accept-Encoding,Cookie Expires: Thu, 01 Jan 1970 00:00:00 GMT Cache-Control: private, must-revalidate, max-age=0 Last-Modified: Mon, 12 Mar 2018 19:10:05 GMT X-XSS-Protection: 1; mode=block Set-Cookie: cpPosTime=1520899806.0221; expires=Tue, 13-Mar-2018 00:11:06 GMT; Max-Age=60; path=/; httponly;HttpOnly Content-Type: text/html; charset=UTF-8 [root@hetzner2 ConfirmAccount]#
- ok, I pinpointed *this* header being set in this OutputPage.php block:
} else { # We do want clients to cache if they can, but they *must* check for updates # on revisiting the page. wfDebug( __METHOD__ . ": private caching; {$this->mLastModified} **", 'private' ); $response->header( 'Expires: ' . gmdate( 'D, d M Y H:i:s', 0 ) . ' GMT' ); $response->header( "Cache-Control: private, must-revalidate, max-age=0" ); }
- so this is within an if statement:
if ( $this->mEnableClientCache ) {
- therefore, we know that mEnableClientCache is true at the time this header is being set
- the else branch runs because this list of conditions evaluated to false:
if ( $config->get( 'UseSquid' ) && !$response->hasCookies() && !SessionManager::getGlobalSession()->isPersistent() && !$this->isPrintable() && $this->mCdnMaxage != 0 && !$this->haveCacheVaryCookies() ) {
- debugging shows that the issue is the 3rd one, !SessionManager::getGlobalSession()->isPersistent()
- This rabbit hole is too deep; I'll just go with the mediawiki recommendation to configure varnish to ignore mediawiki's cache-control headers (see the sketch below).
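- concretely, that would look something like this in vcl_backend_response (a sketch, not yet applied; the 24h ttl is my own target, not from the mediawiki docs):
sub vcl_backend_response {
    if (bereq.http.host == "wiki.opensourceecology.org") {
        # ignore mediawiki's "Cache-Control: private, must-revalidate, max-age=0"
        # on anonymous traffic and cache on our own terms
        unset beresp.http.Cache-Control;
        set beresp.ttl = 24h;
    }
}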
- sent an email to Marcin & Catarina about Munin w/ a screenshot of most graphs for the past week
Tue Mar 06, 2018
- got an ossec alert for brute-force attacks on wp-login.php
- this shouldn't happen, since we're just 403-forbidden blocking this page
- discovered that the httpd config for this block was commented-out in oswh; fixed
- disk usage is higher than I'd like, hovering at 85%
- installed `ncdu` = an ncurses disk usage analyzer
- there's more usage in /var/tmp (61G) than in /var/www (37G)
- most of the space is in /var/tmp/backups_for_migration_from_hetzner1 (58G)
- removed the contents of /var/tmp/backups_for_migration_from_hetzner1/wiki_20180120/old/20180214 (30G)
[maltfield@hetzner2 20180214]$ pwd /var/tmp/backups_for_migration_from_hetzner1/wiki_20180120/old/20180214 [maltfield@hetzner2 20180214]$ du -sh * 296M core 1.4G db.sql 17G htdocs 8.0K imagesLargerThan20M.txt 191M mysqldump_wiki.20180120.sql.bz2 12G wiki_files.20180120.tar.gz [maltfield@hetzner2 20180214]$ du -sh 30G . [maltfield@hetzner2 20180214]$
- continuing wiki varnish config
- found that mediawiki is producing headers in its response that would prevent varnish from caching the main page!
[maltfield@hetzner2 ~]$ curl -I "http://127.0.0.1:8000/wiki/Main_Page" -H "Host: wiki.opensourceecology.org" HTTP/1.1 200 OK Date: Tue, 06 Mar 2018 16:41:16 GMT Server: Apache X-Content-Type-Options: nosniff Content-language: en X-UA-Compatible: IE=Edge Link: </images/ose-logo.png?be82f>;rel=preload;as=image Vary: Accept-Encoding,Cookie Expires: Thu, 01 Jan 1970 00:00:00 GMT Cache-Control: private, must-revalidate, max-age=0 Last-Modified: Tue, 06 Mar 2018 16:34:44 GMT X-XSS-Protection: 1; mode=block Set-Cookie: cpPosTime=1520354476.8255; expires=Tue, 06-Mar-2018 16:42:16 GMT; Max-Age=60; path=/; httponly;HttpOnly Content-Type: text/html; charset=UTF-8 [maltfield@hetzner2 ~]$
- the recommended/documented varnish config on mediawiki's wiki ships with the block that respects the "private|no-cache|no-store" Cache-Control headers sent by mediawiki commented-out, precisely so that varnish ignores mediawiki. https://www.mediawiki.org/wiki/Manual:Varnish_caching#Configuring_Varnish_4.x
In this example, the 'no-cache' flag is being ignored on pages served to anonymous-IP users. Such measures normally are only needed if a wiki is making extensive use of extensions which add this flag indiscriminately (such as a wiki packed with random <choose>/<option> Algorithm tags on the main page and various often-used templates).
# Called after a document has been successfully retrieved from the backend. sub vcl_backend_response { # set minimum timeouts to auto-discard stored objects set beresp.grace = 120s; if (beresp.ttl < 48h) { set beresp.ttl = 48h; } if (!beresp.ttl > 0s) { set beresp.uncacheable = true; return (deliver); } if (beresp.http.Set-Cookie) { set beresp.uncacheable = true; return (deliver); } # if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") { # set beresp.uncacheable = true; # return (deliver); # } if (beresp.http.Authorization && !beresp.http.Cache-Control ~ "public") { set beresp.uncacheable = true; return (deliver); } return (deliver); }
- this issue was exactly what I just encountered on osemain, but in wordpress! There was a plugin named 'fundraising' that was preventing caching. Apparently there's an extension in mediawiki doing something similar.
- found a list of all extensions currently installed: https://wiki.opensourceecology.org/wiki/Special:Version
- disabled all extensions & confirmed that the issue was still present
- note that there's no easy way to do this; you have to just comment-out all the require & wfLoadExtension function calls in LocalSettings.php until Special:Version shows no extensions "installed"
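- (commenting them all out can be scripted; a rough, untested sed sketch, which backs up LocalSettings.php to LocalSettings.php.bak first:)
sed -i.bak -E 's@^[[:space:]]*(require(_once)?|wfLoadExtension)@#&@' LocalSettings.php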
[maltfield@hetzner2 ~]$ curl -I "http://127.0.0.1:8000/wiki/Main_Page?safemode=1" -H "Host: wiki.opensourceecology.org" HTTP/1.1 200 OK Date: Wed, 07 Mar 2018 00:19:58 GMT Server: Apache X-Content-Type-Options: nosniff Content-language: en X-UA-Compatible: IE=Edge Link: </images/ose-logo.png?be82f>;rel=preload;as=image Vary: Accept-Encoding,Cookie Expires: Thu, 01 Jan 1970 00:00:00 GMT Cache-Control: private, must-revalidate, max-age=0 Last-Modified: Wed, 07 Mar 2018 00:18:56 GMT X-XSS-Protection: 1; mode=block Set-Cookie: cpPosTime=1520381998.2197; expires=Wed, 07-Mar-2018 00:20:58 GMT; Max-Age=60; path=/; httponly;HttpOnly Content-Type: text/html; charset=UTF-8 [maltfield@hetzner2 ~]$
- fuck, even when I comment-out all the skins (breaking the site), the issue is still present
[maltfield@hetzner2 ~]$ curl -I "http://127.0.0.1:8000/wiki/Main_Page?safemode=1" -H "Host: wiki.opensourceecology.org" HTTP/1.1 200 OK Date: Wed, 07 Mar 2018 00:22:42 GMT Server: Apache X-Content-Type-Options: nosniff Content-language: en X-UA-Compatible: IE=Edge Vary: Accept-Encoding,Cookie Expires: Thu, 01 Jan 1970 00:00:00 GMT Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache X-XSS-Protection: 1; mode=block Set-Cookie: cpPosTime=1520382163.0842; expires=Wed, 07-Mar-2018 00:23:43 GMT; Max-Age=60; path=/; httponly;HttpOnly Content-Type: text/html; charset=UTF-8 [maltfield@hetzner2 ~]$
- found a list of all the files that have 'cache-control' in them in the wiki dir
[root@hetzner2 wiki.opensourceecology.org]# grep -irl 'cache-control' * htdocs/opensearch_desc.php htdocs/includes/actions/RawAction.php htdocs/includes/OutputPage.php htdocs/includes/AjaxResponse.php htdocs/includes/api/ApiMain.php htdocs/includes/api/i18n/ru.json htdocs/includes/api/i18n/de.jsone htdocs/includes/api/i18n/ru.jsone htdocs/includes/api/i18n/ba.json htdocs/includes/api/i18n/de.json htdocs/includes/api/i18n/ba.jsone htdocs/includes/OutputPage.phpee htdocs/includes/specials/SpecialRevisiondelete.php htdocs/includes/specials/SpecialUndelete.php htdocs/includes/specials/SpecialUploadStash.php htdocs/includes/specials/SpecialJavaScriptTest.php htdocs/includes/HeaderCallback.php htdocs/includes/resourceloader/ResourceLoader.php htdocs/includes/libs/filebackend/HTTPFileStreamer.php htdocs/includes/WebStart.php htdocs/includes/PHPVersionCheck.php htdocs/includes/OutputHandler.php htdocs/thumb.php htdocs/img_auth.php htdocs/extensions/ConfirmEdit/FancyCaptcha/FancyCaptcha.class.php htdocs/extensions/ConfirmAccount/frontend/specialpages/actions/ConfirmAccount_body.php htdocs/extensions/ConfirmAccount/frontend/specialpages/actions/UserCredentials_body.php wiki-error.log [root@hetzner2 wiki.opensourceecology.org]#
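- (the $files variable used in the loop below was presumably populated from that same grep, e.g.:)
files="$(grep -irl 'cache-control' *)"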
- found the possible sources
[root@hetzner2 wiki.opensourceecology.org]# for file in $(echo $files); do grep -il 'no-cache, no-store, max-age=0, must-revalidate' $file; done htdocs/includes/OutputPage.php htdocs/includes/OutputPage.phpee htdocs/includes/specials/SpecialRevisiondelete.php htdocs/includes/specials/SpecialUndelete.php htdocs/extensions/ConfirmAccount/frontend/specialpages/actions/ConfirmAccount_body.php htdocs/extensions/ConfirmAccount/frontend/specialpages/actions/UserCredentials_body.php [root@hetzner2 wiki.opensourceecology.org]#
- OutputPage appears to default to using the cache. The code that sets our undesired header is wrapped in an if whose condition is triggered when 'mEnableClientCache' is not true. It defaults to true
Mon Mar 05, 2018
- Updated log & hours
- checked munin, all looks good.
- removed the temporary comments in front of the block that denies access to 'wp-login.php' on osemain.
- also noticed that (in addition to the blue "cache misses" graph nearly disappearing) the varnish4 "backend traffic" graph shows a great decrease in backend connections, especially the blue "success" line, following my varnish change to cache backend responses with code 404 (not found)
- began researching varnish integration with mediawiki
- this great article describes how Mediawiki's core was designed to work with Varnish specifically https://www.mediawiki.org/wiki/Manual:Varnish_caching#cite_note-2
- this is less of an issue for our wiki because we don't allow anonymous edits, but I added these lines to LocalSettings.php to record ip addresses using the "X-Forwarded-For" header instead of the Source IP, which would just be 127.0.0.1 with our varnish->apache architecture
$wgUseSquid = true; $wgSquidServers = array( '127.0.0.1', 'wiki.opensourceecology.org' ); $wgUsePrivateIPs = true; //Use $wgSquidServersNoPurge if you don't want MediaWiki to purge modified pages //$wgSquidServersNoPurge = array('127.0.0.1');
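- note that with $wgUseSquid enabled, mediawiki will also send HTTP PURGE requests to the hosts listed in $wgSquidServers whenever a page changes; varnish needs a vcl_recv handler for that method, or the purges just fall through to the backend. A hedged sketch:
acl purge_allowed { "127.0.0.1"; }
sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purge_allowed) {
            return (synth(405, "PURGE not allowed"));
        }
        return (purge);
    }
}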
- found a list of squid (relevant to varnish) related options https://www.mediawiki.org/wiki/Manual:Configuration_settings#Squid
Sun Mar 04, 2018
- after enabling caching of 404s, the "Hit rates" graph of varnish4 in munin changed--the "cache misses" shrunk significantly. Now I just need to find & reduce the "Cache hits for pass"
- began testing why a request for 'https://www.opensourceecology.org/robots.txt' produces a HIT-FOR-PASS
root@personal:~# curl -i https://www.opensourceecology.org/robots.txt HTTP/1.1 200 OK Server: nginx Date: Sun, 04 Mar 2018 19:10:42 GMT Content-Type: text/plain; charset=utf-8 Content-Length: 182 Connection: keep-alive Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Link: <https://www.opensourceecology.org/wp-json/>; rel="https://api.w.org/" X-XSS-Protection: 1; mode=block Set-Cookie: OSESESSION=ee8abcqulnd7f8e9jbv1gve33cht0fuohmje7bps0iokihfhnah4c5ib4r95vpv9muq2ccqms2vhjf469rfj75gv1bik423vkobovv3; path=/; HttpOnly;HttpOnly X-Varnish: 2109565 Age: 0 Via: 1.1 varnish-v4 Accept-Ranges: bytes Strict-Transport-Security: max-age=15552001 Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report" Sitemap: https://www.opensourceecology.org/sitemap.xml Sitemap: https://www.opensourceecology.org/news-sitemap.xml User-agent: * Disallow: /wp-admin/ Allow: /wp-admin/admin-ajax.php root@personal:~#
- strangely, this file doesn't even exist on disk! (wordpress serves a virtual robots.txt, generated at request time, when no physical file is present)
[root@hetzner2 ~]# find /var/www/html/www.opensourceecology.org | grep -i robots.txt [root@hetzner2 ~]#
- it also works if I just hit apache directly, locally
[root@hetzner2 ~]# curl -i 'http://127.0.0.1:8000/robots.txt' -H 'Host: www.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 19:14:38 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Link: <https://www.opensourceecology.org/wp-json/>; rel="https://api.w.org/" X-XSS-Protection: 1; mode=block Set-Cookie: OSESESSION=bear8i55veadv2osgrpsg4m6t45n4r1c1gdcveomhbftk26okrgvjmeafge1poj5a0kkrakqn5ghbb7s4r0bqmiic3o9l5djpu5nmc3; path=/; HttpOnly;HttpOnly Content-Length: 182 Content-Type: text/plain; charset=utf-8 Sitemap: https://www.opensourceecology.org/sitemap.xml Sitemap: https://www.opensourceecology.org/news-sitemap.xml User-agent: * Disallow: /wp-admin/ Allow: /wp-admin/admin-ajax.php [root@hetzner2 ~]#
- indeed, this behaviour differs from fef, another one of our wordpress sites. note that the "Expires" & "Cache-Control" headers are absent in Apache's response
[root@hetzner2 ~]# curl -i 'http://127.0.0.1:8000/robots.txt' -H 'Host: fef.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 19:19:56 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Link: <https://fef.opensourceecology.org/wp-json/>; rel="https://api.w.org/" X-XSS-Protection: 1; mode=block Content-Length: 67 Content-Type: text/plain; charset=utf-8 User-agent: * Disallow: /wp-admin/ Allow: /wp-admin/admin-ajax.php
- I'm beginning to think that it's one of the osemain plugins causing this issue
- it appears that there's some sort of apache-level (or os-level) caching going on too. note that the first request below takes 10 fucking seconds. the second takes less than 1 second.
[root@hetzner2 ~]# time curl -I 'http://127.0.0.1:8000/apple-touch-icon.png' -H 'Host: fef.opensourceecology.org' HTTP/1.1 404 Not Found Date: Sun, 04 Mar 2018 20:34:18 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Expires: Wed, 11 Jan 1984 05:00:00 GMT Cache-Control: no-cache, must-revalidate, max-age=0 Link: <https://fef.opensourceecology.org/wp-json/>; rel="https://api.w.org/" X-XSS-Protection: 1; mode=block Content-Type: text/html; charset=UTF-8 real 0m10.150s user 0m0.002s sys 0m0.001s [root@hetzner2 ~]# time curl -I 'http://127.0.0.1:8000/apple-touch-icon.png' -H 'Host: fef.opensourceecology.org' HTTP/1.1 404 Not Found Date: Sun, 04 Mar 2018 20:34:35 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Expires: Wed, 11 Jan 1984 05:00:00 GMT Cache-Control: no-cache, must-revalidate, max-age=0 Link: <https://fef.opensourceecology.org/wp-json/>; rel="https://api.w.org/" X-XSS-Protection: 1; mode=block Content-Type: text/html; charset=UTF-8 real 0m0.125s user 0m0.002s sys 0m0.001s [root@hetzner2 ~]#
- if I wait a bit (I've observed this after 3 minutes) & re-run the above, it takes another 10 seconds.
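- curl's -w timers can show where those 10 seconds go (dns vs connect vs first byte); a quick sketch:
curl -s -o /dev/null -w 'dns:%{time_namelookup}s connect:%{time_connect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n' -H 'Host: fef.opensourceecology.org' 'http://127.0.0.1:8000/apple-touch-icon.png'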
- it looks like the "expires" & "cache-control" headers are set by wp natively (or, at most, a plugin calling this native function) using wp_get_nocache_headers and related functions https://developer.wordpress.org/reference/functions/wp_get_nocache_headers/
- wp-includes/class-wp.php appears to apply the above function for 404s, which is annoying (ie: we want to cache 404s for requests that are slamming our site constantly, such as '/apple-touch-icon.png') https://developer.wordpress.org/reference/classes/wp/handle_404/
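- if we wanted to fix this on the wordpress side instead of in varnish, wp_get_nocache_headers() runs its result through the 'nocache_headers' filter, so a tiny mu-plugin (e.g. wp-content/mu-plugins/cache-404.php, a hypothetical path) could override the headers on 404s. An untested sketch (the 300s max-age is an arbitrary assumption):
<?php
// mu-plugin sketch: let 404 responses go out with a short cacheable TTL
add_filter( 'nocache_headers', function ( $headers ) {
    if ( function_exists( 'is_404' ) && is_404() ) {
        return array( 'Cache-Control' => 'public, max-age=300' );
    }
    return $headers;
} );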
- removed google analytics code from osemain via Appearance -> Theme Options -> Footer
<script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-58526017-1', 'auto'); ga('send', 'pageview'); </script>
- created a staging site for osemain so I could tear down plugins & themes to isolate the cause of the call to nocache_headers()
- renamed 'osemain' A record on CF DNS to 'staging' for staging.opensourceecology.org
- created /etc/httpd/conf.d/staging.opensourceecology.org.conf
- created staging vhost files & db. note that the max char limit for a db username is 16, so I used "osemain_s_user" & "osemain_s_db"
source /root/backups/backup.settings prodVhostDir=/var/www/html/www.opensourceecology.org stagingVhostDir=/var/www/html/staging.opensourceecology.org prodDbName=osemain_db stagingDbName=osemain_s_db stagingDbUser=osemain_s_user stagingDbPass=CHANGEME
- updated the wp-config.php to be http, not https, so I can just query with curl (otherwise it 301's to https)
- successfully reproduced the issue on the staging site
[root@hetzner2 staging.opensourceecology.org]# time curl -I 'http://127.0.0.1:8000/' -H "Host: staging.opensourceecology.org" HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:45:23 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Set-Cookie: OSESESSION=b3kuv36bt8fi8j9j07heh3a8epvug5fdq4go0ajrgsv4gq4aevvpi2v5ms212o68pdq4tkrjfde79274bbus3nvs9fvam273i6vohg0; path=/; HttpOnly;HttpOnly Content-Type: text/html; charset=UTF-8 real 0m10.331s user 0m0.003s sys 0m0.000s [root@hetzner2 staging.opensourceecology.org]# time curl -I 'http://127.0.0.1:8000/' -H "Host: staging.opensourceecology.org" HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:45:35 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Set-Cookie: OSESESSION=p9gdj5vsrhh18t0ahlv60f68eq35slum7bjv3vb98v3rbplifj9a0e2lpqhh67lsdmi6h1d1krj2pah0ja0vaa8gvrgeokqkp0sha72; path=/; HttpOnly;HttpOnly Content-Type: text/html; charset=UTF-8 real 0m0.315s user 0m0.002s sys 0m0.001s [root@hetzner2 staging.opensourceecology.org]#
- I removed the plugins directory in the staging docroot & replaced it with an empty dir, and confirmed that the issue went away!
[root@hetzner2 wp-content]# mv plugins plugins.old [root@hetzner2 wp-content]# mkdir plugins [root@hetzner2 wp-content]# time curl -I 'http://127.0.0.1:8000/' -H 'Host: staging.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:49:00 GMT Server: Apache Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Content-Type: text/html; charset=UTF-8 real 0m12.423s user 0m0.002s sys 0m0.002s [root@hetzner2 wp-content]#
- I copied in my base plugin set from the old plugins dir, and confirmed that the headers look ok still
[root@hetzner2 wp-content]# ls plugins force-strong-passwords google-authenticator google-authenticator-encourage-user-activation rename-wp-login ssl-insecure-content-fixer vcaching [root@hetzner2 wp-content]# time curl -I 'http://127.0.0.1:8000/' -H 'Host: staging.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:52:36 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Content-Type: text/html; charset=UTF-8 real 0m13.186s user 0m0.002s sys 0m0.002s [root@hetzner2 wp-content]#
- I iteratively copied-in the plugins one-by-one that were activated on the prod site, until I saw the issue crop up after I added the "fundraising" plugin
[root@hetzner2 wp-content]# rsync -a plugins.old/akismet/ plugins/ [root@hetzner2 wp-content]# time curl -I 'http://127.0.0.1:8000/' -H 'Host: staging.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:55:27 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Content-Type: text/html; charset=UTF-8 real 0m10.166s user 0m0.002s sys 0m0.001s [root@hetzner2 wp-content]# rsync -a plugins.old/brankic-photostream-widget plugins/ [root@hetzner2 wp-content]# time curl -I 'http://127.0.0.1:8000/' -H 'Host: staging.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:55:45 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Content-Type: text/html; charset=UTF-8 real 0m0.134s user 0m0.003s sys 0m0.000s [root@hetzner2 wp-content]# rsync -a plugins.old/duplicate-post plugins/ [root@hetzner2 wp-content]# time curl -I 'http://127.0.0.1:8000/' -H 'Host: staging.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:56:02 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Content-Type: text/html; charset=UTF-8 real 0m0.133s user 0m0.001s sys 0m0.002s [root@hetzner2 wp-content]# rsync -a plugins.old/fundraising plugins/ [root@hetzner2 wp-content]# time curl -I 'http://127.0.0.1:8000/' -H 'Host: staging.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:56:17 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Set-Cookie: OSESESSION=d679j00fdalqp4usqe0iuf71vs6jhl29bbdds5e2n56i0p15m1379vnnlj2c1hlunjn061l6rtkij0141mpsvagnei6g8anihk842i2; path=/; HttpOnly;HttpOnly Content-Type: text/html; charset=UTF-8 real 0m0.144s user 0m0.001s sys 0m0.002s
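- (the same bisection could be scripted instead of done by hand; a rough sketch:)
for p in plugins.old/*/; do
  rsync -a "${p%/}" plugins/   # copy one plugin directory at a time
  echo "== after ${p} =="
  curl -sI 'http://127.0.0.1:8000/' -H 'Host: staging.opensourceecology.org' | grep -iE 'cache-control|set-cookie'
done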
- I replaced the temp plugins dir with the original dir & confirmed the issue was present
root@hetzner2 wp-content]# rm -rf plugins [root@hetzner2 wp-content]# mv plugins.old plugins [root@hetzner2 wp-content]# time curl -I 'http://127.0.0.1:8000/' -H 'Host: staging.opensourceecology.org' HTTP/1.1 200 OK Date: Sun, 04 Mar 2018 22:58:51 GMT Server: Apache X-VC-Enabled: true X-VC-TTL: 86400 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Link: <http://staging.opensourceecology.org/wp-json/>; rel="https://api.w.org/" Link: <http://staging.opensourceecology.org/>; rel=shortlink X-XSS-Protection: 1; mode=block Set-Cookie: OSESESSION=uvjm8sqbdl1uqbmrr27g5qns54sg0enk177m1efqpuournqneir1qtruoqdu6ph3go9vqflrpg58gq1hrn09hoqrrs4a6ttfeukcgs3; path=/; HttpOnly;HttpOnly Content-Type: text/html; charset=UTF-8 real 0m11.097s user 0m0.001s sys 0m0.002s
- I moved out the 'fundraising' plugin, and confirmed that the issue was fixed by this one change
- I added staging.opensourceecology.org vhost configs to nginx & varnish
- updated cert to include the staging SAN
certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/staging.opensourceecology.org/htdocs -d staging.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d munin.opensourceecology.org /bin/chmod 0400 /etc/letsencrypt/archive/*/pri* nginx -t && service nginx reload
- added '950120' [Possible Remote File Inclusion (RFI) Attack: Off-Domain Reference/Link] to the modsec ignore list in /var/ossec/rules/local_rules.xml as it was too spammy
- updated Wordpress documentation for file permissions & wp-cli troubleshooting per what was discovered during the osemain migration (for plugins, themes, & upgrade dirs)
- discovered that the $wgDebugComments variable puts the mediawiki debug messages (which may contain sensitive user or server info!) directly in the html output! https://www.mediawiki.org/wiki/Manual:Configuration_settings#Debug/logging
- removed this from LocalSettings.php & added a strong warning message telling admins to *only* use $wgDebugLogFile, which exists *outside* the docroot
- added rules to ossec to detect brute-force attempts against mediawiki & trigger an active response (block via iptables)
<!-- Mediawiki Special:UserLogin brute force --> <rule id="100060" level="3"> <if_sid>31530</if_sid> <url>Special:UserLogin</url> <regex>POST</regex> <description>Mediawiki login attempt.</description> </rule> <!-- If we see frequent mediawiki login POST's, it is likely a bot. --> <rule id="100061" level="8" frequency="6" timeframe="30"> <if_matched_sid>100060</if_matched_sid> <same_source_ip /> <description>Multiple mediawiki login attempts (possible brute force).</description> </rule>
- tested the above rules & verified that I was banned via iptables when I kicked off a bunch of simultaneous login attempts with curl
while true; do date; curl --data "wpName=superman&wpPassword1=letmein" "https://wiki.opensourceecology.org/index.php?title=Special:UserLogin"; echo; done
- note that I had to execute this in many terminals at once, as the request<-->response time was too long to actually trigger the ban
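- (the same load can be generated from a single terminal by backgrounding the requests; a sketch:)
for i in $(seq 1 30); do
  curl -s -o /dev/null --data "wpName=superman&wpPassword1=letmein" "https://wiki.opensourceecology.org/index.php?title=Special:UserLogin" &
done; wait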
- I'm now finished with the general hardening of mediawiki. Next up: varnish integration (and hardening of *that* config)
Sat Mar 03, 2018
- all munin graphs look great. I'll let it populate for 2 weeks before delivering to Marcin.
- updated StatusCake for osemain from 'http://opensourceecology.org' to 'https://www.opensourceecology.org/'
- saw that, indeed, statuscake shows that the site sped up significantly post migration. Before, our load times averaged ~1.3 seconds. Now it's showing ~0.3 seconds
- munin is showing about a 45% hit rate. that's lower than I would expect.
- I found a command to get a list of the cache misses
[root@hetzner2 ~]# varnishtop -i BereqURL list length 38 hetzner2.opensourceecology.org 3.49 BereqURL /apple-touch-icon.png 3.45 BereqURL /apple-touch-icon-precomposed.png 2.31 BereqURL /favicon.png 2.28 BereqURL /apple-touch-icon-120x120.png 2.24 BereqURL /apple-touch-icon-120x120-precomposed.png 1.43 BereqURL /apple-touch-icon-152x152.png 1.40 BereqURL /apple-touch-icon-152x152-precomposed.png 0.57 BereqURL /category/proposals/feed/ 0.48 BereqURL /robots.txt 0.46 BereqURL /localhost/localhost/varnish_expunge-day.png 0.46 BereqURL /static/logo-h.png 0.46 BereqURL /localhost/localhost/varnish_bad-day.png 0.46 BereqURL /localhost/localhost/varnish_uptime-day.png 0.46 BereqURL /localhost/localhost/varnish_objects-day.png 0.46 BereqURL /localhost/localhost/varnish_threads-day.png 0.46 BereqURL /localhost/localhost/varnish_memory_usage-day.png 0.46 BereqURL /localhost/localhost/varnish_request_rate-day.png 0.46 BereqURL /localhost/localhost/varnish_transfer_rates-day.png 0.46 BereqURL /localhost/localhost/varnish_hit_rate-day.png 0.46 BereqURL /static/style-new.css 0.46 BereqURL /localhost/localhost/varnish_backend_traffic-day.png 0.46 BereqURL /varnish4-day.html 0.46 BereqURL /category/ted-fellows/ 0.38 BereqURL / ... [root@hetzner2 ~]#
- the above output shows "BereqURL" occurrences in `varnishlog` = backend requests = cache misses
- We see that the '/apple-touch-icon.png' file is requested about 3 times per minute. It's actually just a 404. This shows that varnish is not caching this 404; caching it would be an easy win
- We also see that the '/' is being requested ~0.38 times per second. So about every 3 seconds, varnish is troubling apache for the main page.
- why? I want this cached for 24 hours. Looks like we could be better optimized
- dug back in my logs to 2017-11-24, when I configured wordpress' varnish config on what to cache by response code
# Avoid caching error responses #if (beresp.status == 404 || beresp.status >= 500) { if ( beresp.status != 200 && beresp.status != 203 && beresp.status != 300 && beresp.status != 301 && beresp.status != 302 && beresp.status != 304 && beresp.status != 307 && beresp.status != 410 ) { set beresp.ttl = 0s; set beresp.grace = 15s; }
- according to the (still outstanding) bug report I filed 3 months ago, I didn't cache 404 only because the developer of the wordpress plugin didn't cache 404. https://wordpress.org/support/topic/please-dont-cache-403-by-default/
- but varnish, by default, *does* cache 404 https://book.varnish-software.com/4.0/chapters/VCL_Basics.html#the-initial-value-of-beresp-ttl
- therefore, I think I *should* cache 404. Or at least experiment with it.
- I changed it on prod for www.opensourceecology.org to cache 404s too
# Avoid caching error responses #if (beresp.status == 404 || beresp.status >= 500) { if ( beresp.status != 200 && beresp.status != 203 && beresp.status != 300 && beresp.status != 301 && beresp.status != 302 && beresp.status != 304 && beresp.status != 307 && beresp.status != 410 && beresp.status != 404 ) { set beresp.ttl = 0s; set beresp.grace = 15s; }
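- a quick way to verify the change (sketch): request a known-404 path twice with a GET and confirm the second response comes from the cache (Age > 0, two ids in X-Varnish)
curl -s -o /dev/null -D - 'https://www.opensourceecology.org/apple-touch-icon.png' | grep -iE '^(age|x-varnish):'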
- I noticed that varnishlog shows some HITs with an already-expired header
- ReqURL / - VCL_return hash - ReqUnset Accept-Encoding: gzip, deflate, br - ReqHeader Accept-Encoding: gzip - VCL_call HASH - ReqHeader hash: #www.opensourceecology.org - VCL_return lookup - Hit 83893 - VCL_call HIT - VCL_return deliver - RespProtocol HTTP/1.1 - RespStatus 200 - RespReason OK - RespHeader Date: Sat, 03 Mar 2018 19:52:48 GMT - RespHeader Server: Apache - RespHeader X-VC-Enabled: true - RespHeader X-VC-TTL: 86400 - RespHeader Expires: Thu, 19 Nov 1981 08:52:00 GMT - RespHeader Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 - RespHeader Pragma: no-cache
- why was this "Expires: Thu, 19 Nov 1981 08:52:00 GMT"?
- why was this "Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0"?
- why was this "Pragma: no-cache"?
- a much simpler test is '/robots.txt'. Why is this not caching?
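- a sketch of an external test: fetch it twice and watch the caching headers (on a hit, Age should be non-zero and X-Varnish should show two XIDs):
for i in 1 2; do
  curl -sI https://www.opensourceecology.org/robots.txt | grep -Ei '^(age|x-varnish)'
done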
Fri Mar 02, 2018
- Marcin followed-up on osemain
- he said it's running much faster :)
- he found a modsec false-positive when attempting to modify https://www.opensourceecology.org/open-building-institute-nears-launch/
- I whitelisted 959070, sqli
- began doing some ossec tuning
- after osemain migrated to hetzner2, the alert volume has become unreasonably high, so I want to prevent alerts from emailing in a few cases
- rule 30411 = modsecurity where the following strings match
- 960020 = Pragma Header requires Cache-Control Header
- 960009 = Request Missing a User Agent Header
- rule 31123 = Web server 503 error code (Service unavailable)
- but we'll not disable 31163 = Multiple web server 503 error code (Service unavailable).
- returned to configuring cacti
- created nginx, varnish, & httpd config files
- created log dirs for nginx & apache
- created cacti.opensourceecology.org DNS entry pointing to 138.201.84.243 on CloudFlare
- updated /etc/php.ini to include the cacti dir (open_basedir)
- created cacti db & user per https://skrinhitam.wordpress.com/2017/04/21/how-to-install-cacti-on-centos-7/
CREATE DATABASE cacti_db;
CREATE USER 'cacti_user'@'localhost' IDENTIFIED BY 'CHANGEME';
GRANT ALL PRIVILEGES ON cacti_db.* TO 'cacti_user'@'localhost';
FLUSH PRIVILEGES;
- updated /usr/share/cacti/include/config.php with credentials
- confirmed that we're running cacti v1.1.36
[root@hetzner2 cacti]# cat /usr/share/cacti/include/cacti_version 1.1.36 [root@hetzner2 cacti]#
- according to the website, the latest version is 1.1.36, released 2018-02-25. So we're on the latest, and it was updated less than a month ago. That's great. https://www.cacti.net/download_cacti.php
- finally got it connected; the socket config was nontrivial: you actually have to define the port as 3306 even though it connects via the local socket, so it's a misnomer (misnumber?)
$database_type     = 'mysql';
$database_default  = 'cacti_db';
$database_hostname = 'localhost';
$database_username = 'cacti_user';
$database_password = 'CHANGEME';
$database_port     = '3306';
- this time I got an error that the db was not initialized (it's true; there's no tables!), but the sql file they're telling me to use doesn't exist!
The Cacti Database has not been initialized. Please initilize it before continuing. To initilize the Cacti database, issue the following commands either as root or using a valid account. mysqladmin -uroot -p create cacti mysql -uroot -p -e "grant all on cacti.* to 'someuser'@'localhost' identified by 'somepassword'" mysql -uroot -p -e "grant select on mysql.time_zone_name to 'someuser'@'localhost' identified by 'somepassword'" mysql -uroot -p cacti < /pathcacti/cacti.sql Where /pathcacti/ is the path to your Cacti install location. Change someuser and somepassword to match your site preferences. The defaults are cactiuser for both user and password. NOTE: When installing a remote poller, the config.php file must be writable by the Web Server account, and must include valid connection information to the main Cacti server. The file should be changed to read only after the install is completed.
- ah, the distro put it into /usr/share/doc/cacti-1.1.36/cacti.sql ..of course (?)
[root@hetzner2 cacti]# rpm -ql cacti | grep cacti.sql /usr/share/doc/cacti-1.1.36/cacti.sql [root@hetzner2 cacti]#
- I initialized the db with this file
mysql -u"cacti_user" -p"CHANGEME" cacti_db < /usr/share/doc/cacti-1.1.36/cacti.sql
- set `date.timezone = "UTC"` in /etc/php.ini
- I had to comment-out the default config's block that blocks all non-localhost traffic (my .htpasswd config should suffice)
<Directory /usr/share/cacti/>
#	<IfModule mod_authz_core.c>
#		# httpd 2.4
#		Require host localhost
#	</IfModule>
</Directory>
- granted the new cacti user SELECT permissions on the mysql.time_zone_name table
GRANT SELECT ON mysql.time_zone_name TO cacti_user@localhost;
FLUSH PRIVILEGES;
- populated the mysql timezone database, following a backup
sudo su -
source /root/backups/backup.settings
stamp=`date +%Y%m%d_%T`
tmpDir=/var/tmp/dbChange.$stamp
mkdir $tmpDir
chown root:root $tmpDir
chmod 0700 $tmpDir
pushd $tmpDir

service httpd stop

# create backup of all DBs for good measure
time nice mysqldump -u"root" -p"${mysqlPass}" --all-databases | gzip -c > preBackup.all_databases.$stamp.sql.gz

service httpd start

mysql_tzinfo_to_sql /usr/share/zoneinfo/ | mysql -u"root" -p"${mysqlPass}" mysql
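- a sketch to verify the timezone tables actually got populated (assumes backup.settings is still sourced for ${mysqlPass}):
mysql -u"root" -p"${mysqlPass}" -e 'SELECT COUNT(*) FROM mysql.time_zone_name;'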
- expanded the cert to include cacti
certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d cacti.opensourceecology.org
/bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
nginx -t && service nginx reload
- Cacti was very unhappy with our mariadb settings. I'm not concerned, as this is just for 1 server, and I don't want to fuck with the db backend which is prod for so many sites. But I'll leave this here for the record
MariaDB Tuning (/etc/my.cnf) - [ Documentation ]
Note: Many changes below require a database restart

version: 5.5.56-MariaDB -> >= 5.6
    MySQL 5.6+ and MariaDB 10.0+ are great releases, and are very good versions to choose. Make sure you run the very latest release though which fixes a long standing low level networking issue that was causing spine many issues with reliability.
collation_server: latin1_swedish_ci -> utf8_general_ci
    When using Cacti with languages other than English, it is important to use the utf8_general_ci collation type as some characters take more than a single byte. If you are first just now installing Cacti, stop, make the changes and start over again. If your Cacti has been running and is in production, see the internet for instructions on converting your databases and tables if you plan on supporting other languages.
character_set_client: latin1 -> utf8
    When using Cacti with languages other than English, it is important to use the utf8 character set as some characters take more than a single byte. If you are first just now installing Cacti, stop, make the changes and start over again. If your Cacti has been running and is in production, see the internet for instructions on converting your databases and tables if you plan on supporting other languages.
max_connections: 151 -> >= 100
    Depending on the number of logins and use of spine data collector, MariaDB will need many connections. The calculation for spine is: total_connections = total_processes * (total_threads + script_servers + 1), then you must leave headroom for user connections, which will change depending on the number of concurrent login accounts.
max_allowed_packet: 1048576 -> >= 16777216
    With Remote polling capabilities, large amounts of data will be synced from the main server to the remote pollers. Therefore, keep this value at or above 16M.
tmp_table_size: 16M -> >= 64M
    When executing subqueries, having a larger temporary table size, keep those temporary tables in memory.
join_buffer_size: 0.125M -> >= 64M
    When performing joins, if they are below this size, they will be kept in memory and never written to a temporary file.
innodb_file_per_table: OFF -> ON
    When using InnoDB storage it is important to keep your table spaces separate. This makes managing the tables simpler for long time users of MariaDB. If you are running with this currently off, you can migrate to the per file storage by enabling the feature, and then running an alter statement on all InnoDB tables.
innodb_doublewrite: ON -> OFF
    With modern SSD type storage, this operation actually degrades the disk more rapidly and adds a 50% overhead on all write operations.
innodb_additional_mem_pool_size: 8M -> >= 80M
    This is where metadata is stored. If you had a lot of tables, it would be useful to increase this.
innodb_lock_wait_timeout: 50 -> >= 50
    Rogue queries should not cause the database to go offline to others. Kill these queries before they kill your system.
innodb_flush_log_at_trx_commit: 1 -> 2
    Setting this value to 2 means that you will flush all transactions every second rather than at commit. This allows MariaDB to perform writing less often.
- I setup /etc/snmp/snmpd.conf with a basic config
- I got cacti to add the device & connect to it
- I couldn't get cacti to add a graph. Debugging via the apache logs showed that it was trying to execute spine, which isn't installed
- I uncommented the poller.php cron entry
- the poller was pretty unhappy about our hardened php.ini
[root@hetzner2 snmp]# sudo -u cacti /usr/share/cacti/poller.php PHP Warning: ini_set() has been disabled for security reasons in /usr/share/cacti/include/global.php on line 65 PHP Warning: file_exists(): open_basedir restriction in effect. File(localhost) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/usr/share/cacti/:/etc/cacti/) in /usr/share/cacti/lib/database.php on line 65 PHP Warning: file_exists(): open_basedir restriction in effect. File(/usr/local/spine/bin/spine) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/usr/share/cacti/:/etc/cacti/) in /usr/share/cacti/include/global_arrays.php on line 560 PHP Warning: php_uname() has been disabled for security reasons in /usr/share/cacti/poller.php on line 79 03/02/2018 21:21:41 - POLLER: Poller[1] WARNING: Cron is out of sync with the Poller Interval! The Poller Interval is '300' seconds, with a maximum of a '300' second Cron, but 1.168,2 seconds have passed since the last poll! PHP Warning: ini_set() has been disabled for security reasons in /usr/share/cacti/poller.php on line 256 PHP Warning: ini_set() has been disabled for security reasons in /usr/share/cacti/poller.php on line 257 PHP Warning: file_exists(): open_basedir restriction in effect. File(/bin/php) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/usr/share/cacti/:/etc/cacti/) in /usr/share/cacti/lib/poller.php on line 124 PHP Warning: system() has been disabled for security reasons in /usr/share/cacti/lib/poller.php on line 143 PHP Warning: exec() has been disabled for security reasons in /usr/share/cacti/lib/poller.php on line 131 PHP Warning: file_exists(): open_basedir restriction in effect. 
File(/bin/php) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/var/www/html/cacti.opensourceecology.org/:/usr/share/cacti/:/etc/cacti/) in /usr/share/cacti/lib/poller.php on line 124 PHP Warning: system() has been disabled for security reasons in /usr/share/cacti/lib/poller.php on line 143 PHP Warning: exec() has been disabled for security reasons in /usr/share/cacti/lib/poller.php on line 131 PHP Warning: putenv() has been disabled for security reasons in /usr/share/cacti/lib/rrd.php on line 55 PHP Warning: popen() has been disabled for security reasons in /usr/share/cacti/lib/rrd.php on line 97 ^C03/02/2018 21:23:11 - POLLER: Poller[1] WARNING: Cacti Master Poller process terminated by user
- therefore, I think we need spine
- spine is not in the yum repos :(
- this isn't looking good https://forums.cacti.net/post-91042.html
- ok, I'm abandoning Cacti; let's try Munin. It emphasizes that it works well OOTB, and it appears to run perl on the back-end, only producing static html pages. That's the most ideal. Just give us a whole docroot that is totally static content!
- began attempting to get munin working
- I installed it with `yum install munin` and it automatically created a dir with static files in /var/www/html/munin
- I started the service `service munin-node start`
- It works! That. Was. Fucking. Awesome. Now I just need to wait 30 minutes per https://www.tecmint.com/install-munin-network-monitoring-in-rhel-centos-fedora/
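- a quick way to confirm munin-node is answering in the meantime (sketch; 4949 is munin-node's default port, and this assumes nc is installed):
printf 'list\nquit\n' | nc localhost 4949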
- Holy hell, it includes varnish stats ootb!! What is this awesomeness?!?
- no going back; I'm renaming all the 'cacti' configs to 'munin'
- changed the cert SAN from cacti to munin
certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org -d munin.opensourceecology.org
/bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
nginx -t && service nginx reload
- varnish graphs weren't populating; I think it's because we need the varnish4 plugin https://blog.raika.fr/index.php/2015/06/14/varnish-cache-4-et-munin-node/
pushd /etc/munin/plugins
wget https://raw.githubusercontent.com/munin-monitoring/contrib/master/plugins/varnish/varnish4_
chmod +x varnish4_

mv varnish_hit_rate varnish_hit_rate.bak
mv varnish_backend_traffic varnish_backend_traffic.bak
mv varnish_bad varnish_bad.bak
mv varnish_expunge varnish_expunge.bak
mv varnish_memory_usage varnish_memory_usage.bak
mv varnish_objects varnish_objects.bak
mv varnish_request_rate varnish_request_rate.bak
mv varnish_threads varnish_threads.bak
mv varnish_transfer_rates varnish_transfer_rates.bak
mv varnish_uptime varnish_uptime.bak

ln -s varnish4_ varnish_hit_rate
ln -s varnish4_ varnish_backend_traffic
ln -s varnish4_ varnish_bad
ln -s varnish4_ varnish_expunge
ln -s varnish4_ varnish_memory_usage
ln -s varnish4_ varnish_objects
ln -s varnish4_ varnish_request_rate
ln -s varnish4_ varnish_threads
ln -s varnish4_ varnish_transfer_rates
ln -s varnish4_ varnish_uptime

cat >> /etc/munin/plugin-conf.d/munin-node << EOF
[varnish4_*]
user root
env.varnishstat /bin/varnishstat
EOF

munin-node-configure --shell
service munin-node restart
- a test run shows the plugin returning data for hit_rate (the warning is munin complaining about a malformed line in my plugin-conf.d addition)
[root@hetzner2 plugins]# munin-run varnish_hit_rate Line is not well formed (env.name) at /usr/share/perl5/vendor_perl/Munin/Node/Service.pm line 110. at /etc/munin/plugin-conf.d/munin-node line 10. Skipping the rest of the file at /usr/share/perl5/vendor_perl/Munin/Node/Service.pm line 110. client_req.value 726315 cache_miss.value 220489 cache_hitpass.value 17813 cache_hit.value 488013 [root@hetzner2 plugins]#
- confirmed that data is coming in, but under the category "webserver" instead of "varnish". I see no obvious way to change that *shrug*
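- if it bugs me later, the category comes from the 'graph_category' value the plugin emits, so something like this sketch (an untested assumption, not something I ran) should move the graphs:
grep -n 'graph_category' /etc/munin/plugins/varnish4_
#sed -i 's/graph_category webserver/graph_category varnish/' /etc/munin/plugins/varnish4_ && service munin-node restart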
Thr Mar 01, 2018
- began to prepare for my production migration of the osemain site (www.opensourceecology.org)
- because the domain name is shared by both osemain & the wiki (as opposed to having them on distinct domains or subdomains), the DNS cutover may cause issues with the wiki
- currently, the naked domain name 'opensourceecology.org' points to '78.46.3.178' = hetzner1
- currently, 'www' is a cloud-fronted CNAME to the above naked domain name 'opensourceecology.org'
- I will be deleting the above 'www' CNAME & replacing it with a non-CF'd A record pointing to '138.201.84.243' = hetzner2
- therefore, wiki requests made to 'opensourceecology.org/wiki/...' should be unimpacted. However, anything that links to the wiki at 'www.opensourceecology.org/wiki/...' will be impacted. This is a fairly minor concern that should be fixable with an (nginx?) redirect on hetzner2 for anything '/wiki/...', simply stripping-out the 'www'.
- there may be issues in the other direction, namely that someone wants to get our wordpress site by going to 'opensourceecology.org' but hits our wiki instead. I don't intend to fix this unless we see a ton of 404s or hear an outcry of issues.
- in any case, for good measure, I'll make the wiki read-only during the osemain migration by creating the file at hetzner1:/usr/home/osemain/public_html/w/read-only-message
- I did some research on how to put up a warning for the upcoming maintenance on the wiki as a banner on all pages, and I found this plugin--which is the same plugin that Wikipedia famously uses in their donation campaigns https://www.mediawiki.org/wiki/Extension:CentralNotice
- the above article actually recommended just using Sitenotice for a single wiki https://www.mediawiki.org/wiki/Manual:Interface/Sitenotice
- Sitenotice can be updated using the $wgSiteNotice variable in LocalSettings.php
- tested it on our staging site with success
$wgSiteNotice = '<div style="margin: 10px 0px; padding:12px; color: #9F6000; background-color: #FEEFB3;"><i class="fa fa-warning"></i>NOTICE: This wiki will be temporarily made READ-ONLY during a maintenance window today at 15:00 UTC. Please finish & save any pending changes to avoid data loss.</div>';
- applied the change to the production site at hetzner1. It worked!
- began the production migration of osemain CHG-2018-02-05
- first, I locked the prod wiki by creating this file
osemain@dedi978:~/public_html/w$ cat read-only-message The wiki is currently locked for maintenance. Please check back in a few hours. osemain@dedi978:~/public_html/w$
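- this works via mediawiki's $wgReadOnlyFile mechanism: the wiki goes read-only whenever the named file exists, and its contents become the lock message. A sketch of the lock/unlock, assuming LocalSettings.php points $wgReadOnlyFile at this path:
# lock the wiki
echo 'The wiki is currently locked for maintenance. Please check back in a few hours.' > /usr/home/osemain/public_html/w/read-only-message
# unlock it again after the migration
rm /usr/home/osemain/public_html/w/read-only-message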
- I then began the full-system backups on both hetzner 1 & hetzner 2 simultaneously
# hetzner 1
# STEP 0: CREATE BACKUPS
source /usr/home/osemain/backups/backup.settings
/usr/home/osemain/backups/backup.sh

# when finished, SSH into the dreamhost server to verify that the whole system backup was successful before proceeding
bash -c 'source /usr/home/osemain/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh backups/hetzner1/*'

# hetzner 2
sudo su -

# STEP 0: CREATE BACKUPS
# for good measure, trigger a backup of the entire system's database & files:
time /bin/nice /root/backups/backup.sh &>> /var/log/backups/backup.log

# when finished, SSH into the dreamhost server to verify that the whole system backup was successful before proceeding
bash -c 'source /root/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh backups/hetzner2/*'
- 15:30: ugh, after 1.5 hours, the full backup was still running on hetzner1. I'll proceed anyway with the migration, but try to refrain from any removal of content from hetzner1 until after this backup's completion is confirmed
- 15:31: began backup of osemain-specific db & files
- 15:33: osemain-specific backups complete
- confirmed that the system-level backup files had finished being created on hetzner2, but they were still pending scp to our storage server. The ETA on the web root file alone was over an hour, so I proceeded
- 15:36: scp'd the osemain backup files from hetzner1 to hetzner2. This completed in <1 min
- 15:38 finished polishing the db.sql dump & replaced the staging db with this fresh copy
- 15:40 finished creating new vhost files
- I got errors when attempting to update the plugins using wp-cli due to the hardened permissions. It's less-than-ideal, but I updated them to grant the apache group write permissions to 'wp-content/themes' & 'wp-content/plugins' as well as 'uploads'
- err, I actually needed to update the permissions in 'wp-content/upgrade'
- 16:04: finished wp-cli updates
- 16:05: destroyed the existing CNAME for 'www' on opensourceecology.org via CloudFlare & re-created it to point to '138.201.84.243' with a 2 min (the min) TTL.
- 16:11 confirmed the domain name change worked
- fixed /etc/nginx/conf.d/www.opensourceecology.org.conf to use 'www' instead of the temp subdomain 'osemain'
- fixed /etc/httpd/conf.d/000-www.opensourceecology.org.conf
- restarted both httpd & nginx
- fixed the wordpress domain in WP_HOME & WP_SITEURL in /var/www/html/www.opensourceecology.org/wp-config.php by replacing 'osemain' with 'www'
- got a cert warning from the browser; indeed, 'www' hasn't been added to the cert's SAN yet
- fixed the cert, replacing 'osemain' with 'www'
certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org
/bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
nginx -t && service nginx reload
- 16:18 successfully loaded the site without a cert warning
- 16:19 confirmed that both system-wide backups haven't completed yet..
- confirmed that hitting 'opensourceecology.org' doesn't actually take you to the wiki--it still takes you to the old site. After the backups finish, I'll have to move the old wp site's content out of the docroot & install a redirect for queries to the naked root (opensourceecology.org/) to 301 redirect to 'https://www.opensourceecology.org'
- confirmed that the wiki was still accessible at 'http://opensourceecology.org/wiki/'
- confirmed that the wiki is *not* accessible at 'www.opensourceecology.org/wiki/', which now tries to access the wordpress site on hetzner2. I should add a redirect for this--at least for now
- 16:34 finished setting-up the new plugins in the wp dashboard
- when editing a post, I noticed a link pointing to 'blog.opensourceecology.org'. Checking CF, I see that was a CNAME to 'dedi978.your-server.de'. So I deleted this record & added one for 'blog' as a CNAME to 'www'
- 16:47 finished fixing content for knightlab iframes on: /contributors/, /community-true-fans/, & /history-timeline/
- 16:47 confirmed that the dns updated for blog.opensourceecology.org
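- (verified with a quick dig; sketch:)
dig +short blog.opensourceecology.org CNAME
dig +short blog.opensourceecology.org A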
- attempted to click the 'blog.' link on 'https://www.opensourceecology.org/community-true-fans/', but it resulted in a cert warning of course https://blog.opensourceecology.org/2009/01/towards-1000-global-villages-factor-e-live-distillations-part-1/
- added blog SAN to the cert
certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org -w /var/www/html/www.opensourceecology.org/htdocs -d www.opensourceecology.org -d blog.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org
/bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
nginx -t && service nginx reload
- I got an error. I confirmed that let's encrypt supports CNAME, but perhaps I just need to wait longer for propagation https://security.stackexchange.com/questions/134354/why-does-acme-lets-encrypt-require-a-records-not-cname/134389
- 16:56 I confirmed that the system-wide backup on hetzner2 is complete (after 148 minutes), but it's still running on hetzner1
- 17:16 finished updating widgets from "text" to "custom html" on relevant workshops = /cnc-torch-table-workshop/, /eco-tractor_workshop/, /3d-printer-construction-set-workshop-3/
- re-tried the certbot renewal. failed again. actually, I think it's because nginx isn't directing it correctly
- updated the nginx & httpd configs with aliases for 'blog.opensourceecology.org'
- updated the varnish config to use a regex match
- restarted apache, varnish, and nginx
- that worked! the cert now has a SAN for 'blog.opensourceecology.org'
- attempted to click the 'blog.' link on 'https://www.opensourceecology.org/community-true-fans/'; this time it didn't result in a cert error, just a 404 *facepalm* https://blog.opensourceecology.org/2009/01/towards-1000-global-villages-factor-e-live-distillations-part-1/
- got the same on www, *shrug* https://www.opensourceecology.org/2009/01/towards-1000-global-villages-factor-e-live-distillations-part-1/
- 19:03 finished configuring nginx to point wiki requests sent to hetzner2 back to the prod wiki in /etc/nginx/conf.d/www.opensourceecology.org.conf
####################
# REDIRECT TO WIKI #
####################

# this is for backwards-compatibility; before ~ 2018-02, both the wiki and
# this site shared the same domain-name. So, just in case someone sends
# www.opensourceecology.org a query trying to find the wiki, let's send them
# to the right place..
location ~* '^/wiki/' {
	# TODO: change this to 'https://wiki.opensourceecology.org' after the wiki
	# has been migrated
	return 301 http://opensourceecology.org$uri;
}
- 19:11 fixed the facebook iframe on the main page from 'http' to 'https'
- 19:13 the hetzner1 system backup is *still* not finished!
- 19:24 added a redirect on hetzner1 sending all non-wiki traffic for the naked opensourceecology.org domain over to www.opensourceecology.org on hetzner2
# 2018-03-01: osemain was migrated to hetzner2 today; this rule is a catch-all
# to send non-wiki traffic to our hetzner2 server
RewriteCond %{REQUEST_URI} !(/wiki|/w) [NC]
RewriteRule ^(.*)$ https://www.opensourceecology.org/$1 [L,R=301,NC]
- confirmed that correctly requesting on the new www.opensourceecology.org site works https://www.opensourceecology.org/cnc-torch-table-workshop/
- confirmed that correctly requesting on the current wiki works http://opensourceecology.org/wiki/User:Maltfield
- confirmed that incorrectly requesting the wiki from the 'www' works https://www.opensourceecology.org/wiki/User:Maltfield
- confirmed that incorrectly requesting osemain from the naked domain (which is now just the wiki) works http://opensourceecology.org/cnc-torch-table-workshop/
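- (the above confirmations boil down to curl checks like this sketch:)
# wiki requests sent to 'www' should 301 back to the naked domain
curl -sI 'https://www.opensourceecology.org/wiki/User:Maltfield' | grep -Ei '^(HTTP|location)'
# non-wiki paths on the naked domain should 301 to 'www' and then 200
curl -sIL 'http://opensourceecology.org/cnc-torch-table-workshop/' | grep -Ei '^(HTTP|location)'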
- came back later in the evening & found the upload finished
- moved all of the old osemain wordpress files out of the docroot on hetzner1:/usr/home/osemain/public_html into hetzner1:/usr/home/osemain/noBackup/deleteMeIn2019/osemain_olddocroot/
- fixed the image on /marcin-jakubowski/ to use 'https://www.opensourceecology.org/...' for Marcin's photo instead of 'http://opensourceecology.org/...'
- confirmed that clicking the 'update' button didn't clear varnish for the page
- confirmed that clicking the "Purge from Varnish" button successfully cleared the single varnish page
- replaced the "Site Logo Image" URL at Appearance -> Theme Options -> Site Logo from 'http://opensourceecology.org/wp-content/uploads/2014/02/OSE_yellow-copy2.png' to 'https://www.opensourceecology.org/wp-content/uploads/2014/02/OSE_yellow-copy2.png'
- first I had to fix a few modsec false-positives
- 950001, sqli
- 973336, xss
- 958051, xss
- 973331, xss
- 973330, xss
- began researching rrdtool-like solutions
- rrdtool vs mrtg vs cacti vs zabbix
- I'm leaning to cacti
- began installing cacti
- installed depends
yum install php56w-snmp net-snmp net-snmp-utils cacti
- updated the TODO section of OSE_Server#TODO
Tue Feb 27, 2018
- still no word back from Marcin on the resolution of all outstanding issues on osemain & the wiki
- returned to hardening mediawiki
- $wgVerifyMimeType was set to false in a block within LocalSettings.php:
# Workaround for bug in MediaWiki 1.16
# See http://www.mediawiki.org/wiki/Thread:Project:Support_desk/Error:_File_extension_does_not_match_MIME_type
$wgVerifyMimeType = false;
$wgStrictFileExtensions = false;
$wgCheckFileExtensions = false;

##file extensions
# Workaround for bug in MediaWiki 1.16
# See http://www.mediawiki.org/wiki/Thread:Project:Support_desk/Error:_File_extension_does_not_match_MIME_type
$wgVerifyMimeType = false;
$wgStrictFileExtensions = false;
$wgCheckFileExtensions = false;
$wgFileExtensions = array( 'png', 'gif', 'jpg', 'jpeg', 'ppt', 'pdf', 'psd', 'mp3','xls', 'swf', 'doc', 'odt', 'odc', 'odp', 'ods', 'odg', 'pod', 'mm', 'xls', 'mpp', 'svg', 'dxf', 'stp', 'blend', 'g', 'FCStd', 'dia', 'bz2', 'gz', 'tbz', 'tgz', '7z', 'xz');
# See also: http://www.mediawiki.org/wiki/Manual:$wgMimeTypeBlacklist
- I hardened this, replacing it with
################
# FILE UPLOADS #
################

# MIME
$wgVerifyMimeType = true;

# EXTENSIONS
$wgStrictFileExtensions = true;
$wgCheckFileExtensions = true;
$wgFileExtensions = array( 'png', 'gif', 'jpg', 'jpeg', 'ppt', 'pdf', 'psd', 'mp3','xls', 'doc', 'odt', 'odc', 'odp', 'ods', 'odg', 'pod', 'mm', 'xls', 'mpp', 'svg', 'dxf', 'stp', 'blend', 'g', 'FCStd', 'dia', 'bz2', 'gz', 'tbz', 'tgz', '7z', 'xz');
- per the mediawiki guide on file uploads, we may still have issues with compressed archives (ie: zip), and require adding them to the wgTrustedMediaFormats array https://www.mediawiki.org/wiki/Manual:Configuring_file_uploads#Configuring_file_types
- reading through all the upload-related variables https://www.mediawiki.org/wiki/Category:Upload_variables
- found the notion of a "Copy Upload", which means a subset of users can upload directly from a URL from a specified set of domains (ie: flickr); it uses the upload_by_url permission (AKA "Sideloading") https://www.mediawiki.org/wiki/Manual:$wgAllowCopyUploads
- found there's both an UploadWizard and an Upload dialog. The former is built into mediawiki and the latter is a popular extension for uploading multiple files at once
- found a cool option for segregating images into dirs by hashing their name (for scaling past FS issues), but apparently we can't enable it after the first image has been uploaded; may need to be revisited if issues are encountered in the future https://www.mediawiki.org/wiki/Manual:$wgHashedUploadDirectory
- found a good read describing the upload process https://www.mediawiki.org/wiki/Manual:Configuring_file_uploads#Upload_directory
- found another list of upload-related variables https://www.mediawiki.org/wiki/Manual:Configuration_settings#Uploads
- did a deep clean-up of LocalSettings.php
- began researching 2FA for mediawiki
- this guide suggests that the main TOTP extension is OATHAuth https://www.mediawiki.org/wiki/Help:Two-factor_authentication
- added the OATHAuth extension to our mediawiki install
[root@hetzner2 extensions]# date Tue Feb 27 19:11:14 UTC 2018 [root@hetzner2 extensions]# pwd /var/www/html/wiki.opensourceecology.org/htdocs/extensions [root@hetzner2 extensions]# git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/OATHAuth.git Cloning into 'OATHAuth'... remote: Counting objects: 164, done remote: Finding sources: 100% (32/32) remote: Getting sizes: 100% (11/11) remote: Compressing objects: 100% (24018/24018) remote: Total 2210 (delta 13), reused 2200 (delta 12) Receiving objects: 100% (2210/2210), 405.31 KiB | 0 bytes/s, done. Resolving deltas: 100% (1561/1561), done. [root@hetzner2 extensions]#
- that failed as the extension said it required mediawiki >= v 1.31.0...but the latest stable version is 1.30!
==> wiki.opensourceecology.org/error_log <== [Tue Feb 27 19:19:43.898430 2018] [:error] [pid 21700] [client 127.0.0.1:51702] PHP Fatal error: Uncaught exception 'Exception' with message 'OATHAuth is not compatible with the current MediaWiki core (version 1.30.0), it requires: >= 1.31.0.' in /var/www/html/wiki.opensourceecology.org/htdocs/includes/registration/ExtensionRegistry.php:261\nStack trace:\n#0 /var/www/html/wiki.opensourceecology.org/htdocs/includes/registration/ExtensionRegistry.php(148): ExtensionRegistry->readFromQueue(Array)\n#1 /var/www/html/wiki.opensourceecology.org/htdocs/includes/Setup.php(40): ExtensionRegistry->loadFromQueue()\n#2 /var/www/html/wiki.opensourceecology.org/htdocs/includes/WebStart.php(114): require_once('/var/www/html/w...')\n#3 /var/www/html/wiki.opensourceecology.org/htdocs/index.php(40): require('/var/www/html/w...')\n#4 {main}\n thrown in /var/www/html/wiki.opensourceecology.org/htdocs/includes/registration/ExtensionRegistry.php on line 261
- sent an email to the author, Ryan Lane, asking where I can submit a bug report for the extension https://www.mediawiki.org/wiki/User:Ryan_lane
- err, I just noticed that his page lists that extension as "unmaintained" :(
- indeed, 2016-10 was the last update
- actually, that documentation is just out-of-date. The file was updated earlier this month on 2018-02-12 by Jayprakash12345 https://phabricator.wikimedia.org/rEOATd7bdefda31161ad680dd2a88993e3d17734c4bef
- the relevant bug is here https://phabricator.wikimedia.org/T187037
- I OAuth-approved the Phabricator app to access my mediawiki account and successfully submitted a reply on the relevant bug to the dev who made the breaking change https://phabricator.wikimedia.org/T187037
- within the hour, Sam Reed (Reedy)--who works for the Wikimedia Foundation--responded stating that the documentation is incorrect, and that I should have instead checked-out the "REL1_30" branch during the clone https://en.wikipedia.org/wiki/User:Reedy
- attempted to download again
[root@hetzner2 extensions]# git clone -b REL1_30 https://gerrit.wikimedia.org/r/p/mediawiki/extensions/OATHAuth.git Cloning into 'OATHAuth'... remote: Counting objects: 164, done remote: Finding sources: 100% (32/32) remote: Getting sizes: 100% (11/11) remote: Compressing objects: 100% (24018/24018) remote: Total 2210 (delta 13), reused 2200 (delta 12) Receiving objects: 100% (2210/2210), 405.31 KiB | 0 bytes/s, done. Resolving deltas: 100% (1561/1561), done. [root@hetzner2 extensions]#
- that fixed the error, but logging-in fails
Database error A database query error has occurred. This may indicate a bug in the software.[WpXJwYmXdCJvWua7xeiBhQAAAAA] 2018-02-27 21:12:33: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"
- attempted to run `php update.php` from the maintenance dir (as has fixed this previously), but I got a syntax error. Indeed, this was due to my faulty sed find-and-replace of ini_set calls
if (#ini_set( "memory_limit", $newLimit ) === false ) {
- fortunately, that appeared to be a one-time/first-pass issue due to my earlier, failed regex arguments when testing the sed command. I removed the stray comment, then re-ran the final command that I actually documented, and it worked fine.
- I retried `php update.php`, and it ran fine.
- I retried the login, and it also worked fine after the update.
- updated the documentation to include the logic to checkout the appropriate branch when checking out the extensions from git
- removed all git-installed extension dirs & downloaded them all again. verified that they all work. ran update.php, & successfully logged-in again
- went to my user settings & saw a new option to enable 2FA. did it successfully.
- logged-out, attempted to log back in. after entering my username & password, I was sent to another page asking for my TOTP token. I entered it, and I logged in successfully.
- I logged-out, and attempted to log back in again. This time I intentionally entered the wrong password, and it told me this without ever showing me the TOTP field.
- I attempted to login again. This time I used the correct password & an intentionally incorrect TOTP token. I was given an error "Verification failed" on the TOTP "Token" step. I entered the correct token, and I logged in successfully.
- note that I was not asked to type my password again after the totp token was entered incorrectly; that's kinda nice.
- began researching how to require users (ie: all Administrators!) to use 2FA
- found a list of tasks for this extension https://phabricator.wikimedia.org/tag/mediawiki-extensions-oathauth/
- I set $wgSecureLogin to true in LocalSettings.php per https://phabricator.wikimedia.org/T55197
- found the specific task tracking this ask (forcing users to use 2FA). It was created over a year ago (2016-11) and last updated
Sun Feb 25, 2018
- updated the weekly meeting slides per Marcin's request
- Began investigating (9) Page Information = why the Page Information metadata article's "Display Title" field is blank
- Found relevant documentation on this field
- Discovered at least 1 page where it is *not* blank https://wiki.opensourceecology.org/wiki/CNC_Machine
- I deleted the entire contents of this page, and it still wasn't blank. So it wasn't the contents of the article that filled-in the field
- For comparison, Marcin's biography's article page's "Display title" field is empty https://wiki.opensourceecology.org/wiki/Marcin_Biography
- I tried adding 'DISPLAYTITLE:Marcin Biography' to the head of the article, but it remained blank
- I tried adding 'DISPLAYTITLE:Marcin Biography' to the tail of the article, and it filled-in!
- I removed the above, saved, and it went back to blank so something in the article appears to be blanking it out
- I removed the line 'OrigLang', and the field came back again. That appears to be the culprit.
- So this is the culprit https://wiki.opensourceecology.org/index.php?title=Template:OrigLang
- This was the work of MarkDilley, Venko, & Elifarley
- Confirmed that the Spanish version of the article *does* have the field properly set https://wiki.opensourceecology.org/wiki/Marcin_Biography/es
- I was not able to find an obvious solution to the OrigLang template, though I'm no expert in mediawiki templates
- Sent an email to Marcin describing the issue & asking if it has further implications and if it's a blocker for the migration
- Sent Marcin an email follow-up asking for validation that the (1b) Non-existent File redirects to front page issue has been fixed as well, following the varnish config update
- confirmed that awstats is working as expected for the fef & wiki sites
Sat Feb 24, 2018
- Marcin responded that he got past the 2FA login on osemain, but got a forbidden error when attempting to create new blog posts
- confirmed modsec issue
- whitelisted 973337, xss
- confirmed the fix; emailed Marcin for another validation pass
- Marcin responded to the wiki issues status. I'll list them all for a status update here in my log
- (1a) Can't upload files.
- (1b) Non-existent File redirects to front page = "If I use File:filename.jpg, it takes me to the front page of the wiki."
- (2) Content Missing (Youtube embeds?)
- Solved by sed find-and-replace in db.sql file before CREATE DATABASE is issued
- And Marcin confirmed that the only issue on this page was in-fact the missing youtube videos
- (3) Youtube Embeds
- Solved by sed find-and-replace in db.sql file before CREATE DATABASE is issued
- (4) Issuu Embeds
- Solved by sed find-and-replace in db.sql file before CREATE DATABASE is issued
- (5) Scrumy Embeds
- Solved by sed find-and-replace in db.sql file before CREATE DATABASE is issued
- (6) Recent Changes issues
- Marcin says there's not many changes, and that's the issue. It is a fork of the other site, so I should confirm if this is really an issue
- (7) Ted Embeds
- Solved by sed find-and-replace in db.sql file before CREATE DATABASE is issued
- (8) Vimeo Embeds
- Solved by sed find-and-replace in db.sql file before CREATE DATABASE is issued
- (9) Page information
- Page information for all pages is slightly corrupted, and it never shows number of views - https://wiki.opensourceecology.org/index.php?title=Main_Page&action=info
- The Page Views missing is "won't fix" since this was removed by Mediawiki on 2014-10-20 in their release of mediawiki v1.25
- The "Display Title" field is still blank here
- (10) Popular Pages Missing
- Solved as "won't fix" since this was removed by Mediawiki on 2014-10-20 in their release of mediawiki v1.25
- (11) Page Statistics
- Solved as "won't fix" since this was removed by Mediawiki on 2014-10-20 in their release of mediawiki v1.25
- Therefore, the following wiki issues are outstanding:
- (1a) Can't upload files.
- (1b) Non-existent File redirects to front page = "If I use File:filename.jpg, it takes me to the front page of the wiki."
- (6) Recent Changes issues
- (9) Page information
- Emailed Marcin asking for confirmation of the fix for (1a), which I fixed already via permissions
- Began investigating (1b) again
- Discovered that it's a 301 redirect from our server, but only for the get variable URL. For example:
user@personal:~$ curl -I "https://wiki.opensourceecology.org/index.php?title=Special:Upload" HTTP/1.1 200 OK Server: nginx Date: Sun, 25 Feb 2018 00:30:28 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive X-Content-Type-Options: nosniff Content-language: en X-UA-Compatible: IE=Edge Link: </images/ose-logo.png?be82f>;rel=preload;as=image X-Frame-Options: DENY Vary: Accept-Encoding,Cookie Expires: Thu, 01 Jan 1970 00:00:00 GMT Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache X-XSS-Protection: 1; mode=block Set-Cookie: cpPosTime=1519518628.2754; expires=Sun, 25-Feb-2018 00:31:28 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly X-Varnish: 1326749 Age: 0 Via: 1.1 varnish-v4 Strict-Transport-Security: max-age=15552001 Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report" user@personal:~$ curl -I "https://wiki.opensourceecology.org/index.php?title=Special:Upload&wpDestFile=New.jpg" HTTP/1.1 301 Moved Permanently Server: nginx Date: Sun, 25 Feb 2018 00:30:38 GMT Content-Type: text/html; charset=utf-8 Content-Length: 0 Connection: keep-alive X-Content-Type-Options: nosniff Vary: Accept-Encoding,Cookie Expires: Thu, 01 Jan 1970 00:00:00 GMT Cache-Control: private, must-revalidate, max-age=0 Last-Modified: Sun, 25 Feb 2018 00:30:38 GMT Location: https://wiki.opensourceecology.org/wiki/Main_Page X-XSS-Protection: 1; mode=block Set-Cookie: cpPosTime=1519518638.9758; expires=Sun, 25-Feb-2018 00:31:38 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly X-Varnish: 1301183 Age: 0 Via: 1.1 varnish-v4 Strict-Transport-Security: max-age=15552001 Public-Key-Pins: pin-sha256="UbSbHFsFhuCrSv9GNsqnGv4CbaVh5UV5/zzgjLgHh9c="; pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="; pin-sha256="Vjs8r4z+80wjNcr1YKepWQboSIRi63WsWXhIMN+eWys="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="Y9mvm0exBk1JoQ57f9Vm28jKo5lFm/woKcVxrYxu80o="; pin-sha256="EGn6R6CqT4z3ERscrqNl7q7RCzJmDe9uBhS/rnCHU="; pin-sha256="NIdnza073SiyuN1TUa7DDGjOxc1p0nbfOCfbxPWAZGQ="; pin-sha256="fNZ8JI9p2D/C+bsB3LH3rWejY9BGBDeW0JhMOiMfa7A="; pin-sha256="oyD01TTXvpfBro3QSZc1vIlcMjrdLTiL/M9mLCPX+Zo="; pin-sha256="0cRTd+vc1hjNFlHcLgLCHXUeWqn80bNDH/bs9qMTSPo="; pin-sha256="MDhNnV1cmaPdDDONbiVionUHH2QIf2aHJwq/lshMWfA="; pin-sha256="OIZP7FgTBf7hUpWHIA7OaPVO2WrsGzTl9vdOHLPZmJU="; max-age=3600; includeSubDomains; report-uri="http:opensourceecology.org/hpkp-report" user@personal:~$
- I tried the curl again, and found that the second query (with the wpDestFile GET variable appended) caused the *entire* GET variable sequence to be stripped by the time it made it to the mediawiki
# this is when I do `curl -I "https://wiki.opensourceecology.org/index.php?title=Special:Upload&wpDestFile=New.jpg"` [root@hetzner2 wiki.opensourceecology.org]# tail -f wiki-error.log IP: 127.0.0.1 Start request GET /index.php HTTP HEADERS: X-REAL-IP: 173.234.159.236 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.38.0 ACCEPT: */* X-FORWARDED-FOR: 173.234.159.236, 127.0.0.1, 127.0.0.1 HASH: #wiki.opensourceecology.org ACCEPT-ENCODING: gzip X-VARNISH: 738663 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: SqlBagOStuff, session: SqlBagOStuff ... # this is when I do `curl -I "https://wiki.opensourceecology.org/index.php?title=Special:Upload" [root@hetzner2 wiki.opensourceecology.org]# tail -f wiki-error.log IP: 127.0.0.1 Start request GET /index.php?title=Special:Upload HTTP HEADERS: X-REAL-IP: 173.234.159.236 X-FORWARDED-PROTO: https X-FORWARDED-PORT: 443 HOST: wiki.opensourceecology.org USER-AGENT: curl/7.38.0 ACCEPT: */* X-FORWARDED-FOR: 173.234.159.236, 127.0.0.1, 127.0.0.1 HASH: #wiki.opensourceecology.org ACCEPT-ENCODING: gzip X-VARNISH: 738668 [caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: SqlBagOStuff, session: SqlBagOStuff ...
- so the request mediawiki sees changes from "GET /index.php?title=Special:Upload" to "GET /index.php" when the second variable is added, which is why it just responds with a 301 redirect to the main page.
- confirmed that the correct request does make it to nginx
==> wiki.opensourceecology.org/access.log <== 173.234.159.236 - - [25/Feb/2018:00:39:24 +0000] "HEAD /index.php?title=Special:Upload HTTP/1.1" 200 0 "-" "curl/7.38.0" "-" 173.234.159.236 - - [25/Feb/2018:00:39:30 +0000] "HEAD /index.php?title=Special:Upload&wpDestFile=New.jpg HTTP/1.1" 301 0 "-" "curl/7.38.0" "-"
- confirmed that varnish does see the entire request, but then it appears to strip it!
[root@hetzner2 ~]# varnishlog -q "ReqHeader eq 'X-Forwarded-For: 173.234.159.236'" * << Request >> 1301342 - Begin req 1301341 rxreq - Timestamp Start: 1519519251.915667 0.000000 0.000000 - Timestamp Req: 1519519251.915667 0.000000 0.000000 - ReqStart 127.0.0.1 59862 - ReqMethod HEAD - ReqURL /index.php?title=Special:Upload&wpDestFile=New.jpg - ReqProtocol HTTP/1.0 - ReqHeader X-Real-IP: 173.234.159.236 - ReqHeader X-Forwarded-For: 173.234.159.236 - ReqHeader X-Forwarded-Proto: https - ReqHeader X-Forwarded-Port: 443 - ReqHeader Host: wiki.opensourceecology.org - ReqHeader Connection: close - ReqHeader User-Agent: curl/7.38.0 - ReqHeader Accept: */* - ReqUnset X-Forwarded-For: 173.234.159.236 - ReqHeader X-Forwarded-For: 173.234.159.236, 127.0.0.1 - VCL_call RECV - ReqUnset X-Forwarded-For: 173.234.159.236, 127.0.0.1 - ReqHeader X-Forwarded-For: 173.234.159.236, 127.0.0.1, 127.0.0.1 - ReqHeader X-VC-My-Purge-Key: JOaSAn72IJzrykJp1pfEWaECvUU8KvxZJnxSue3repId3qV8wJOHexjtuhi9r6Wv4FH9y9eFfiMjXX6hvxRrVOEWr2IaBVZMZ7ToEz8nLFdRyjyMkUGMANd6MHOzxiTJ - ReqUnset X-VC-My-Purge-Key: JOaSAn72IJzrykJp1pfEWaECvUU8KvxZJnxSue3repId3qV8wJOHexjtuhi9r6Wv4FH9y9eFfiMjXX6hvxRrVOEWr2IaBVZMZ7ToEz8nLFdRyjyMkUGMANd6MHOzxiTJ - ReqURL /index.php - VCL_return hash - VCL_call HASH - ReqHeader hash: #wiki.opensourceecology.org - VCL_return lookup - Debug "XXXX HIT-FOR-PASS" - HitPass 1301282 - VCL_call PASS - VCL_return fetch - Link bereq 1301343 pass - Timestamp Fetch: 1519519252.049599 0.133932 0.133932 - RespProtocol HTTP/1.0 - RespStatus 301 - RespReason Moved Permanently - RespHeader Date: Sun, 25 Feb 2018 00:40:51 GMT - RespHeader Server: Apache - RespHeader X-Content-Type-Options: nosniff - RespHeader Vary: Accept-Encoding,Cookie - RespHeader Expires: Thu, 01 Jan 1970 00:00:00 GMT - RespHeader Cache-Control: private, must-revalidate, max-age=0 - RespHeader Last-Modified: Sun, 25 Feb 2018 00:40:52 GMT - RespHeader Location: https://wiki.opensourceecology.org/wiki/Main_Page - RespHeader X-XSS-Protection: 1; mode=block - RespHeader Set-Cookie: cpPosTime=1519519252.0366; expires=Sun, 25-Feb-2018 00:41:52 GMT; Max-Age=60; path=/; secure; httponly;HttpOnly - RespHeader Content-Type: text/html; charset=utf-8 - RespHeader X-VC-Req-Host: wiki.opensourceecology.org - RespHeader X-VC-Req-URL: /index.php - RespHeader X-VC-Req-URL-Base: /index.php - RespHeader X-VC-Cacheable: NO:Not cacheable, ttl: 0.000 - RespProtocol HTTP/1.1 - RespHeader X-Varnish: 1301342 - RespHeader Age: 0 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - RespUnset X-VC-Req-Host: wiki.opensourceecology.org - RespUnset X-VC-Req-URL: /index.php - RespUnset X-VC-Req-URL-Base: /index.php - RespHeader X-VC-Cache: MISS - RespUnset X-VC-Cache: MISS - RespUnset X-VC-Cacheable: NO:Not cacheable, ttl: 0.000 - VCL_return deliver - Timestamp Process: 1519519252.049641 0.133974 0.000043 - Debug "RES_MODE 0" - RespHeader Connection: close - Timestamp Resp: 1519519252.049679 0.134012 0.000037 - Debug "XXX REF 1" - ReqAcct 270 0 270 615 0 615 - End
- I checked the varnish config, and it looked like I had used the wordpress varnish config as a place-holder. That must have been stripping the variable in the request somehow. When I reduced the logic to a base config (with vhost support) by copying from the awstats configs, it fixed the issue.
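- re-ran the earlier test to be sure (sketch); the GET variables should now survive through varnish, so this should no longer 301 to the Main Page:
curl -I 'https://wiki.opensourceecology.org/index.php?title=Special:Upload&wpDestFile=New.jpg'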
- added php engine off declaration (as was used in mediawiki apache vhost config file) to the seedhome vhost config file
Tue Feb 20, 2018
- began to investigate (10) Popular Pages Missing
- found that the 'includes/specials/SpecialPopularpages.php' file exists on hetzner1 but not the fresh install on hetzner2
# hetzner1 osemain@dedi978:~/public_html/w$ date Tue Feb 20 20:23:00 CET 2018 osemain@dedi978:~/public_html/w$ pwd /usr/home/osemain/public_html/w osemain@dedi978:~/public_html/w$ ls includes/specials | grep -i popular SpecialPopularpages.php osemain@dedi978:~/public_html/w$ # hetzner2 [root@hetzner2 htdocs]# date Tue Feb 20 19:22:45 UTC 2018 [root@hetzner2 htdocs]# pwd /var/www/html/wiki.opensourceecology.org/htdocs [root@hetzner2 htdocs]# ls includes/specials | grep -i popular [root@hetzner2 htdocs]#
- I read that you can choose whether or not to enable counters with 'wgDisableCounter', so I grepped the entire new install dir for this string and found 1 result in includes/Setup.php:
[root@hetzner2 htdocs]# grep -irC 3 'wgDisableCounter' includes/ includes/Setup.php- includes/Setup.php-// We don't use counters anymore. Left here for extensions still includes/Setup.php-// expecting this to exist. Should be removed sometime 1.26 or later. includes/Setup.php:if ( !isset( $wgDisableCounters ) ) { includes/Setup.php: $wgDisableCounters = true; includes/Setup.php-} includes/Setup.php- includes/Setup.php-if ( $wgMainWANCache === false ) { [root@hetzner2 htdocs]#
- So it may have been removed. Next question: why?
- Found the merge that removed it from the mediawiki-commits mailing list https://www.mail-archive.com/mediawiki-commits@lists.wikimedia.org/msg193945.html
- Found the RFC that proposed removal of PopularPages https://www.mediawiki.org/wiki/Requests_for_comment/Removing_hit_counters_from_MediaWiki_core
- Some people in there say it's not accurate as it is simply an integer, and cannot provide unique hits. Their solution is to use Google Analytics or Piwik.
- Also found this in the changelog for the mediawiki v1.25 release https://www.mediawiki.org/wiki/MediaWiki_1.25#Hit_counters_removed
- In fact, this would be a useless figure anyway, as mediawiki would only increment the counter when a request misses varnish and actually reaches the backend. For our setup, this really does need to be awstats monitoring nginx.
- I updated our awstats config to include our wiki site
[root@hetzner2 awstats]# cat /etc/awstats/awstats.wiki.opensourceecology.org.conf
LogFile="/var/log/nginx/wiki.opensourceecology.org/access.log"
SiteDomain="wiki.opensourceecology.org"
HostAliases="wiki.opensourceecology.org 138.201.84.223"
Include "common.conf"
[root@hetzner2 awstats]#
[root@hetzner2 awstats]# cat /etc/cron.d/awstats_generate_static_files
06 * * * * root /bin/nice /usr/share/awstats/tools/awstats_updateall.pl -configdir=/etc/awstats/ now
16 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=www.openbuildinginstitute.org -dir=/var/www/html/awstats.openbuildinginstitute.org/htdocs/
17 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=seedhome.openbuildinginstitute.org -dir=/var/www/html/awstats.openbuildinginstitute.org/htdocs/
18 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=fef.opensourceecology.org -dir=/var/www/html/awstats.opensourceecology.org/htdocs/
19 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=www.opensourceecology.org -dir=/var/www/html/awstats.opensourceecology.org/htdocs/
20 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=wiki.opensourceecology.org -dir=/var/www/html/awstats.opensourceecology.org/htdocs/
[root@hetzner2 awstats]#
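- the same commands can be run by hand to generate the wiki's static pages immediately instead of waiting for cron (sketch, lifted from the crontab above):
/bin/nice /usr/share/awstats/tools/awstats_updateall.pl -configdir=/etc/awstats/ now
/bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=wiki.opensourceecology.org -dir=/var/www/html/awstats.opensourceecology.org/htdocs/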
- emailed marcin saying that (9), (10), & (11) are "won't fix" and that we should use awstats again
- beginning to investigate (1a) Can't upload files.
- tailed the error logs when I attempted to do an upload, and I got this
==> wiki.opensourceecology.org/error_log <== [Tue Feb 20 20:38:29.746565 2018] [:error] [pid 18604] [client 127.0.0.1:40818] PHP Warning: putenv() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/Setup.php on line 53, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:29.769788 2018] [:error] [pid 18604] [client 127.0.0.1:40818] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/PHPSessionHandler.php on line 126, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:29.770337 2018] [:error] [pid 18604] [client 127.0.0.1:40818] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/PHPSessionHandler.php on line 127, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:29.783048 2018] [:error] [pid 18604] [client 127.0.0.1:40818] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:29.783854 2018] [:error] [pid 18604] [client 127.0.0.1:40818] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 705, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:29.798665 2018] [:error] [pid 18604] [client 127.0.0.1:40818] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:29.799370 2018] [:error] [pid 18604] [client 127.0.0.1:40818] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 705, referer: https://wiki.opensourceecology.org/wiki/Special:Upload ==> wiki.opensourceecology.org/access_log <== 127.0.0.1 - - [20/Feb/2018:20:38:29 +0000] "GET /api.php?action=query&format=json&formatversion=2&titles=File%3AGi%2Ejpg&prop=imageinfo&iiprop=uploadwarning HTTP/1.0" 200 135 "https://wiki.opensourceecology.org/wiki/Special:Upload" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0" ==> wiki.opensourceecology.org/error_log <== [Tue Feb 20 20:38:38.802234 2018] [:error] [pid 18273] [client 127.0.0.1:40834] PHP Warning: putenv() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/Setup.php on line 53, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:38.826196 2018] [:error] [pid 18273] [client 127.0.0.1:40834] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/PHPSessionHandler.php on line 126, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:38.826646 2018] [:error] [pid 18273] [client 127.0.0.1:40834] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/PHPSessionHandler.php on line 127, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:38.839300 2018] [:error] [pid 
18273] [client 127.0.0.1:40834] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:38.840009 2018] [:error] [pid 18273] [client 127.0.0.1:40834] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 705, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:38.854396 2018] [:error] [pid 18273] [client 127.0.0.1:40834] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:38.855178 2018] [:error] [pid 18273] [client 127.0.0.1:40834] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 705, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:38.879773 2018] [:error] [pid 18273] [client 127.0.0.1:40834] PHP Warning: set_time_limit() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 3121, referer: https://wiki.opensourceecology.org/wiki/Special:Upload ==> wiki.opensourceecology.org/access_log <== 127.0.0.1 - - [20/Feb/2018:20:38:38 +0000] "POST /wiki/Special:Upload HTTP/1.0" 429 18407 "https://wiki.opensourceecology.org/wiki/Special:Upload" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0" ==> wiki.opensourceecology.org/error_log <== [Tue Feb 20 20:38:49.757596 2018] [:error] [pid 13256] [client 127.0.0.1:40864] PHP Warning: putenv() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/Setup.php on line 53, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:49.781479 2018] [:error] [pid 13256] [client 127.0.0.1:40864] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/PHPSessionHandler.php on line 126, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:49.781864 2018] [:error] [pid 13256] [client 127.0.0.1:40864] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/PHPSessionHandler.php on line 127, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:49.807448 2018] [:error] [pid 13256] [client 127.0.0.1:40864] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:49.808226 2018] [:error] [pid 13256] [client 127.0.0.1:40864] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 705, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:49.853839 2018] [:error] [pid 13256] [client 127.0.0.1:40864] PHP Warning: ini_set() has been disabled for security reasons in 
/var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 693, referer: https://wiki.opensourceecology.org/wiki/Special:Upload [Tue Feb 20 20:38:49.854676 2018] [:error] [pid 13256] [client 127.0.0.1:40864] PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/rdbms/database/Database.php on line 705, referer: https://wiki.opensourceecology.org/wiki/Special:Upload ==> wiki.opensourceecology.org/access_log <== 127.0.0.1 - - [20/Feb/2018:20:38:49 +0000] "GET /load.php?debug=false&lang=en&modules=mediawiki.helplink%2CsectionAnchor%7Cmediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.skinning.interface%7Cskins.vector.styles&only=styles&skin=vector HTTP/1.1" 200 43047 "https://wiki.opensourceecology.org/wiki/Special:Upload" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0"
- And the relevant line in the super-verbose 'wiki-error.log'
UploadBase::verifyFile: all clear; passing.
[LocalFile] Failed to lock 'Gi.jpg'
MediaWiki::preOutputCommit: primary transaction round committed
- the [LocalFile] tag tells me the source of the error is a class named LocalFile
[root@hetzner2 htdocs]# find . | grep -i localfile ./includes/filerepo/file/LocalFile.php ./includes/filerepo/file/OldLocalFile.php ./includes/filerepo/file/UnregisteredLocalFile.php ./tests/phpunit/includes/filerepo/file/LocalFileTest.php [root@hetzner2 htdocs]#
- checked the obvious: permissions
[root@hetzner2 htdocs]# ls -lah /var/www/html/wiki.opensourceecology.org/htdocs/images | head -n 2 total 352K d---r-x--- 29 not-apache apache 4.0K Feb 14 03:19 .
- so the apache user (via its group) has no write permission on the images directory; I fixed it by re-running the relevant chunk of our migration permissions script:
# DECLARE VARIABLES
source /root/backups/backup.settings
stamp="20180120"
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/wiki_${stamp}"
backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2"
backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz"
dbName_hetzner1='osewiki'
dbName_hetzner2='osewiki_db'
dbUser_hetzner2="osewiki_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/wiki.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"

[ -d "${docrootDir_hetzner2}/images" ] || mkdir "${docrootDir_hetzner2}/images"
chown -R apache:apache "${docrootDir_hetzner2}/images"
find "${docrootDir_hetzner2}/images" -type f -exec chmod 0660 {} \;
find "${docrootDir_hetzner2}/images" -type d -exec chmod 0770 {} \;
- that's better
[root@hetzner2 htdocs]# ls -lah /var/www/html/wiki.opensourceecology.org/htdocs/images | head -n 2 total 352K drwxrwx--- 29 apache apache 4.0K Feb 14 03:19 .
- I tried the upload again, and it worked fine. That fixes (1a)
- these warnings, caused by our hardened php.ini config blocking calls to unsafe & disabled functions like ini_set() & putenv(), are clogging up the logs and are annoying. Let's fix it & add it to the migration procedure
find includes/ -type f -exec sed -i 's%^\(\s*\)ini_set\(.*\)%\1#ini_set\2%' '{}' \;
find includes/ -type f -exec sed -i 's%^\(\s*\)putenv\(.*\)%\1#putenv\2%' '{}' \;
- that fixed it
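- a minimal sanity-check sketch for this step (my addition, assuming GNU grep; both counts should drop to 0 after the sed run):
# count any remaining active (un-commented) calls to the disabled functions
grep -rE '^\s*ini_set' includes/ | wc -l
grep -rE '^\s*putenv' includes/ | wc -l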
- I documented this on Mediawiki#migrate_site_from_hetzner1_to_hetzner2
- began to investigate (1b)
- what's weird here is that this link totally works:
- https://wiki.opensourceecology.org/index.php?title=Special:Upload
- but as soon as you add the "&wpDestFile=New.jpg" to the URL, it sends me to the Main_Page
- when I tail the wiki-error.log file, I see the very beginning of the message shows it's just getting a "GET /index.php" request. So the problem must exist before mediawiki (apache? varnish? nginx?)
- here's that wiki-error.log entry, showing the request mediawiki actually received:
IP: 127.0.0.1
Start request GET /index.php
HTTP HEADERS:
X-REAL-IP: 209.208.216.133
X-FORWARDED-PROTO: https
X-FORWARDED-PORT: 443
HOST: wiki.opensourceecology.org
USER-AGENT: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0
ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
ACCEPT-LANGUAGE: en-US,en;q=0.5
REFERER: https://wiki.opensourceecology.org/wiki/Hello
UPGRADE-INSECURE-REQUESTS: 1
X-FORWARDED-FOR: 209.208.216.133, 127.0.0.1, 127.0.0.1
ACCEPT-ENCODING: gzip
HASH: #wiki.opensourceecology.org
X-VARNISH: 947159
[caches] cluster: EmptyBagOStuff, WAN: mediawiki-main-default, stash: db-replicated, message: SqlBagOStuff, session: SqlBagOStuff
[caches] LocalisationCache: using store LCStoreDB
- the apache logs show "GET /index.php" as well
127.0.0.1 - - [20/Feb/2018:21:39:50 +0000] "GET /index.php HTTP/1.1" 301 - "https://wiki.opensourceecology.org/wiki/Hello" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0"
127.0.0.1 - - [20/Feb/2018:21:39:50 +0000] "GET /wiki/Main_Page HTTP/1.1" 200 71842 "https://wiki.opensourceecology.org/wiki/Hello" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0"
- Found that I can limit varnishlog output by domain:
varnishlog -q "ReqHeader ~ wiki.opensourceecology.org"
- Better, found that I can limit varnishlog output by the client's (my) ip address
varnishlog -q "X-Forwarded-For ~ 209.208.216.133"
- added the above to the useful commands under our varnish documentation page at Web_server_configuration#Useful_Commands
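- note the two filters can also be combined; a minimal sketch (assuming varnish 4's vsl-query boolean operators):
# show only my requests to the wiki vhost
varnishlog -q 'ReqHeader:Host ~ "wiki.opensourceecology.org" and ReqHeader:X-Forwarded-For ~ "209.208.216.133"'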
Mon Feb 19, 2018
- discovered that the bug report I submitted to LibreOffice Online, requesting hyperlink functionality in their Impress Online (a Google Docs Presentation replacement), got fixed! https://bugs.documentfoundation.org/show_bug.cgi?id=113361
- sent an email to Marcin, suggesting that we should revisit & submit bug reports, so that this could be a viable solution that meets our needs in a couple years.
- Marcin responded with a list of 11 issues with the Staging wiki
- (1a) Can't upload files.
- (1b) Non-existent File redirects to front page = "If I use File:filename.jpg, it takes me to the front page of the wiki."
- (2) Content Missing (Youtube embeds?)
- Content missing on https://wiki.opensourceecology.org/wiki/Tractor_User_Manual
- (3) Youtube Embeds
- Videos not embedding - https://wiki.opensourceecology.org/wiki/CEB_Press_Fabrication_videos
- (4) Issuu Embeds
- Issuu not embedding on top - https://wiki.opensourceecology.org/wiki/Civilization_Starter_Kit_DVD_v0.01
- (5) Scrumy Embeds
- https://wiki.opensourceecology.org/wiki/Development_Team_Log . Scrumy doesn't appear to embed anywhere throughout wiki - here's another example - https://wiki.opensourceecology.org/wiki/Tractor_Construction_Set_2017
- (6) Recent Changes issues
- Recent Changes doesn't appear to be displaying properly - https://wiki.opensourceecology.org/index.php?title=Special:RecentChanges&limit=500
- (7) Ted Embeds
- Video embed not working on top of page. https://wiki.opensourceecology.org/wiki/Donate
- (8) Vimeo Embeds
- Video not embedded - https://wiki.opensourceecology.org/wiki/OSEmail
- (9) Page information
- Page information for all pages is slightly corrupted, and it never shows number of views - https://wiki.opensourceecology.org/index.php?title=Main_Page&action=info
- (10) Popular Pages Missing
- Popular Pages is missing from Data and tools section of https://wiki.opensourceecology.org/wiki/Special:SpecialPages
- (11) Page Statistics
- https://wiki.opensourceecology.org/wiki/Special:Statistics doesn't show page view statistics.
- I iterated through the list & attempted to reproduce the issues
- (1a) Can't upload files.
- I was able to reproduce this. Attempting to upload an image resulted in the error message: 'Action Failed. Could not open lock file for "mwstore://local-backend/local-public/7/78/Image.jpg".'
- (1b) Non-existent File redirects to front page
- Marcin's test page has 2 links to non-existent files. Both redirect to the main page https://wiki.opensourceecology.org/wiki/Hello
- (2) Content Missing (Youtube Embeds?)
- I confirmed that the youtube video embed was missing. Replacing 'https://www.youtube.com/embed//' with 'https://www.youtube.com/embed/' fixed it.
- (3) Youtube Embeds
- Same as above. Replacing 'https://www.youtube.com/embed//' with 'https://www.youtube.com/embed/' fixed one of the videos.
- (4) Issuu Embeds
- confirmed. got a modsec issue when attempting to update the page, replacing the string 'http://static.issuu.com/webembed/' with 'https://static.issuu.com/webembed/'
- whitelisted 950018, generic attack = "universal pdf xss url detected"
- confirmed the fix of the string replacement above fixed this issue
- (5) Scrumy Embeds
- Confirmed that the embed from 'scrumy.com' is not working on the staging wiki
- attempted to make an edit, but got a modsec issue id = 950001
- whitelisted 950001, sqli attack
- Fixed the issue by replacing the string 'http://scrumy.com/' with 'https://scrumy.com/'
- (6) Recent Changes issues
- Marcin's description was vague, but I do see discrepancies between the 2x pages:
- It looks like the difference is that the diff & hist links moved from the left of the article name to the right of the article name. I should confirm with Marcin if this is the only issue && if it's actually OK
- (7) Ted Embeds
- confirmed that the ted talk embed is not loading
- attempted to fix it, but got a modsec forbidden on 958008
- whitelisted 958008, xss
- whitelisted 973329, xss
- fixed by replacing the string 'http://embed.ted.com/' with 'https://embed.ted.com/'
- also discovered that the flattr embed is broken on the Donate page. This is a consequence of us removing the Flattr Mediawiki plugin we were using, which was no longer maintained. I emailed Marcin and asked him to remove all content dependent on the Flattr plugin from the prod wiki.
- (8) Vimeo Embeds
- confirmed that the vimeo video embed is not working
- fixed by replacing the string 'http://player.vimeo.com/' with 'https://player.vimeo.com/'
- (9) Page information
- Confirmed that the Information page does not show the number of views, but it does on the prod site. I'm not sure if this was just removed in newer versions of Mediawiki or what; it may be that the page hit counters were dropped from mediawiki core in 1.25. I'll have to investigate this further.
- Not sure what else is "corrupted" per Marcin's email. I'll have to ask him to clarify.
- (10) Popular Pages Missing
- Confirmed that the Popular Pages link is no longer present, but it is on the prod site. Not sure why. I'll have to investigate this further...
- (11) Page Statistics
- Confirmed that the "View statistics" that's present in the prod wiki is missing. May be related to (9). Further investigation is necessary...
- Attempted to fix (2), (4), (5), and (8) with simple sed string replacements (a sanity-check sketch follows the import notes below)
# DECLARE VARIABLES
source /root/backups/backup.settings
stamp="20180120"
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/wiki_${stamp}"
backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2"
backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz"
dbName_hetzner1='osewiki'
dbName_hetzner2='osewiki_db'
dbUser_hetzner2="osewiki_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/wiki.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"

pushd "${backupDir_hetzner2}/current"
bzip2 -dc ${backupFileName_db_hetzner1} > db.sql

# verify the first 2 (non-comment) occurrences of $dbName meet the naming convention of "<siteName>_db"
vim db.sql

# fix youtube embeds
fromString='https://www.youtube.com/embed//'
toString='https://www.youtube.com/embed/'
sed -i "s^$fromString^$toString^g" db.sql

# fix issuu embeds
fromString='http://static.issuu.com/webembed/'
toString='https://static.issuu.com/webembed/'
sed -i "s^$fromString^$toString^g" db.sql

# fix scrumy embeds
fromString='http://scrumy.com/'
toString='https://scrumy.com/'
sed -i "s^$fromString^$toString^g" db.sql

# fix ted embeds
fromString='http://embed.ted.com/'
toString='https://embed.ted.com/'
sed -i "s^$fromString^$toString^g" db.sql

# fix vimeo embeds
fromString='http://player.vimeo.com/'
toString='https://player.vimeo.com/'
sed -i "s^$fromString^$toString^g" db.sql

time nice mysql -uroot -p${mysqlPass} -sNe "DROP DATABASE IF EXISTS ${dbName_hetzner2};"
time nice mysql -uroot -p${mysqlPass} -sNe "CREATE DATABASE ${dbName_hetzner2}; USE ${dbName_hetzner2};"
time nice mysql -uroot -p${mysqlPass} < "db.sql"
time nice mysql -uroot -p${mysqlPass} -sNe "GRANT SELECT, INSERT, UPDATE, DELETE ON ${dbName_hetzner2}.* TO '${dbUser_hetzner2}'@'localhost' IDENTIFIED BY '${dbPass_hetzner2}'; FLUSH PRIVILEGES;"
- unfortunately, that broke the wiki, resulting in an error
[WotzE6X9Jz1lrHTtbDhlLgAAAAI] 2018-02-20 01:00:03: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"
- running the update script fixed the error above!
pushd "${docrootDir_hetzner2}/maintenance" php update.php
- bonus: this confirms that mediawiki schema updates still work with the hardened SELECT, INSERT, UPDATE, DELETE-only permissions for the db user
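- for future runs of this migration step, a minimal pre-import sanity check (a sketch of my own, assuming the double-slash youtube embeds are representative of the breakage):
# should print 0 if the broken double-slash youtube embeds are gone
grep -c 'youtube.com/embed//' db.sql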
- emailed Marcin asking him to validate that I fixed (2), (4), (5), (7), & (8) above
Sun Feb 18, 2018
- documented a quick-start guide for 2FA http://opensourceecology.org/wiki/2FA#Quick_Start_Guide
- emailed Marcin the above guide/documentation
Thr Feb 15, 2018
- began following the mediawiki hardening guide https://www.mediawiki.org/wiki/Manual:Security
- confirmed that our permissions are ideal, and I updated the documentation with this Mediawiki#Proper_File.2FDirectory_Ownership_.26_Permissions
- confirmed that the only dir owned by the web server's user (/images/, which necessarily requires write access) is denied from executing php scripts per the vhost config
[root@hetzner2 conf.d]# date
Thu Feb 15 15:24:55 UTC 2018
[root@hetzner2 conf.d]# pwd
/etc/httpd/conf.d
[root@hetzner2 conf.d]# cat 00-wiki.opensourceecology.org.conf
...
# don't execute any php files inside the uploads directory
<LocationMatch "/images/">
        php_flag engine off
</LocationMatch>
<LocationMatch "/images/.*(?i)\.(cgi|shtml|php3?|phps|phtml)$">
        Order Deny,Allow
        Deny from All
</LocationMatch>
...
[root@hetzner2 conf.d]#
- yes, we're using tls. yes, we use hsts (not to mention hpkp). ssllabs gives us an A+ on our https setup https://www.ssllabs.com/ssltest/analyze.html?d=wiki.opensourceecology.org
- our php.ini config doesn't mention anything about register_globals. I confirmed that it must be off then, since we're running php v 5.6.31: it's been off by default since v4.2.0, and the directive was removed from php entirely in v5.4 https://secure.php.net/manual/en/ini.core.php#ini.register-globals
- allow_url_fopen is already off
- session.use_trans_sid is already off
- the mysql config already has networking disabled (and iptables won't allow it too)
- changed the mediawiki article on our wiki to only grant "SELECT, INSERT, UPDATE and DELETE" permissions to the osewiki_user user per the hardening guide Mediawiki#migrate_site_from_hetzner1_to_hetzner2
- note this may break mediawiki updates from the maintenance scripts, but we can create a new user with all permissions, if necessary, and pass those directly into the update scripts using --dbuser & --dbpass as described here mediawikiwiki:Manual:Maintenance_scripts#Configuration
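- a minimal sketch of what that would look like ('osewiki_admin' is a hypothetical privileged db user, not one that exists yet):
pushd "${docrootDir_hetzner2}/maintenance"
php update.php --dbuser 'osewiki_admin' --dbpass 'CHANGEME'
popd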
- added a variable '$wgSecretKey' to the LocalSettings.php file per mediawikiwiki:Manual:$wgSecretKey
- I just used keepassx to give me a [a-zA-Z0-9]{128} string.
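- an equivalent server-side one-liner (a sketch of my own, assuming /dev/urandom; keepassx works just as well):
# emit a 128-char [a-zA-Z0-9] secret suitable for $wgSecretKey
tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 128; echo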
- the LocalSettings.php file is already located outside the docroot.
- updated the debugging options in LocalSettings.php to write to a log file outside the docroot
[root@hetzner2 ~]# touch /var/www/html/wiki.opensourceecology.org/wiki-error.log [root@hetzner2 ~]# chown apache:apache /var/www/html/wiki.opensourceecology.org/wiki-error.log [root@hetzner2 ~]# ls -lah /var/www/html/wiki.opensourceecology.org/wiki-error.log -rw-r--r-- 1 apache apache 0 Feb 15 16:46 /var/www/html/wiki.opensourceecology.org/wiki-error.log [root@hetzner2 ~]#
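- the corresponding LocalSettings.php change is roughly this (a sketch using the standard $wgDebugLogFile setting; the exact path assumes our vhost layout):
cat >> /var/www/html/wiki.opensourceecology.org/LocalSettings.php << 'EOF'
# write mediawiki's verbose debug log outside the docroot
$wgDebugLogFile = "/var/www/html/wiki.opensourceecology.org/wiki-error.log";
EOF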
Wed Feb 14, 2018
- updated mediawiki migration process from findings yesterday Mediawiki#migrate_site_from_hetzner1_to_hetzner2
- Marcin found that clicking a link to a non-existing file on a wiki incorrectly redirects to the main page. it should link to the page to upload the missing file
- note that the following page *does* load
Tue Feb 13, 2018
- no response from Marcin yet about osemain validation
- confirmed that the awstats site for opensourceecology.org was still working & got data for 2 consecutive days
- the icons were missing for ose awstats but not obi, so I copied them in-place
[maltfield@hetzner2 html]$ sudo cp -r awstats.openbuildinginstitute.org/htdocs/awstatsicons awstats.opensourceecology.org/htdocs/ [sudo] password for maltfield: [maltfield@hetzner2 html]$
- attempted to modify the wiki install to come straight from the tarball. Unfortunately, we can't do the git solution, since it doesn't include the vendor dir & we can't use Composer without destroying our php.ini security
# DECLARE VARIABLES
source /root/backups/backup.settings
stamp="20180120"
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/wiki_${stamp}"
backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2"
backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz"
dbName_hetzner1='osewiki'
dbName_hetzner2='osewiki_db'
dbUser_hetzner2="osewiki_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/wiki.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"

# cleanup current dir, making backups of old contents
pushd "${backupDir_hetzner2}"
mkdir "old/`date +%Y%m%d`"
mv current/* "old/`date +%Y%m%d`"
mv "${docrootDir_hetzner2}" "old/`date +%Y%m%d`"

# get backups from hetzner1
pushd current
scp -P 222 osemain@dedi978.your-server.de:${backupDir_hetzner1}/current/* ${backupDir_hetzner2}/current/

# download mediawiki core source code
wget https://releases.wikimedia.org/mediawiki/1.30/mediawiki-1.30.0.tar.gz
tar -xzvf mediawiki-1.30.0.tar.gz
mkdir "${docrootDir_hetzner2}"
rsync -av --progress mediawiki-1.30.0/ "${docrootDir_hetzner2}/"

# copy-in our images from backups
rsync -av --progress "usr/www/users/osemain/w/images/" "${docrootDir_hetzner2}/images/"
# and move the lone image sticking in root into the images directory
rsync -av --progress "usr/www/users/osemain/w/ose-logo.png" "${docrootDir_hetzner2}/images/"

# create LocalSettings.php that just requires the file from outside the docroot
# write multi-line to file for documentation copy & paste
cat << EOF > "${docrootDir_hetzner2}/LocalSettings.php"
<?php
# including separate file that contains the database password so that it is not stored within the document root.
# For more info see:
#  * https://www.mediawiki.org/wiki/Manual:Security
#  * https://wiki.r00tedvw.com/index.php/Mediawiki/Hardening
\$docRoot = dirname( __FILE__ );
require_once "\$docRoot/../LocalSettings.php";
?>
EOF

# extensions
pushd "${docrootDir_hetzner2}/extensions"
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/CategoryTree.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ConfirmAccount.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ConfirmEdit.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Cite.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ParserFunctions.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Gadgets.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ReplaceText.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Renameuser.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/UserMerge.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Nuke.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Widgets.git
pushd Widgets
git submodule init
git submodule update
popd

# skins
pushd "${docrootDir_hetzner2}/skins"
git clone https://gerrit.wikimedia.org/r/p/mediawiki/skins/CologneBlue.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/skins/Modern.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/skins/MonoBook.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/skins/Vector.git
popd

# set permissions
chown -R not-apache:apache "${vhostDir_hetzner2}"
find "${vhostDir_hetzner2}" -type d -exec chmod 0050 {} \;
find "${vhostDir_hetzner2}" -type f -exec chmod 0040 {} \;
chown not-apache:apache-admins "${vhostDir_hetzner2}/LocalSettings.php"
chmod 0040 "${vhostDir_hetzner2}/LocalSettings.php"
[ -d "${docrootDir_hetzner2}/images" ] || mkdir "${docrootDir_hetzner2}/images"
chown -R apache:apache "${docrootDir_hetzner2}/images"
find "${docrootDir_hetzner2}/images" -type f -exec chmod 0660 {} \;
find "${docrootDir_hetzner2}/images" -type d -exec chmod 0770 {} \;

# attempt to update
pushd ${docrootDir_hetzner2}/maintenance
php update.php
popd
- mediawiki dumbly appears to hard-code "/tmp" as the system's temporary directory in TempFSFile.php
[Wed Feb 14 01:47:56.364862 2018] [:error] [pid 3152] [client 127.0.0.1:54708] PHP Warning: is_dir(): open_basedir restriction in effect. File(/tmp) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/usr/share/php/Composer) in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/filebackend/fsfile/TempFSFile.php on line 90
- the above warning disappears when I tell it to just check the php config (which it really should be doing on its own). I do this by setting the following in LocalSettings.php
# tell mediawiki to actually ask php what the tmp dir is, rather than assuming
# that "/tmp" is the system's temporary directory
$wgTmpDirectory = sys_get_temp_dir();
- update.php execution was successful
[root@hetzner2 maintenance]# php update.php ... Done in 0.5 s. [root@hetzner2 maintenance]#
- confirmed that I can login
- attempted to make an edit, but got a modsec forbidden
- 981318, sqli
- confirmed a successful edit!
- the site still looks like shit, probably because the CSS is being 403'd
- https://wiki.opensourceecology.org/load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector
- https://wiki.opensourceecology.org/load.php?debug=false&lang=en&modules=mediawiki.legacy.commonPrint,shared%7Cmediawiki.sectionAnchor%7Cmediawiki.skinning.interface%7Cskins.vector.styles&only=styles&skin=vector
- https://wiki.opensourceecology.org/load.php?debug=false&lang=en&modules=jquery.accessKeyLabel,byteLength,checkboxShiftClick,client,getAttrs,highlightText,mw-jump,suggestions,tabIndex,throttle-debounce%7Cmediawiki.RegExp,Title,api,cldr,jqueryMsg,language,notify,searchSuggest,storage,user,util%7Cmediawiki.api.user,watch%7Cmediawiki.language.data,init%7Cmediawiki.libs.pluralruleparser%7Cmediawiki.page.ready,startup%7Cmediawiki.page.watch.ajax%7Csite%7Cskins.vector.js%7Cuser.defaults&skin=vector&version=19n53uv
- confirmed that the page *does* load without the GET vars https://wiki.opensourceecology.org/load.php
- reduced the issue to just the 'modules' GET var, as the rest load fine https://wiki.opensourceecology.org/load.php?debug=false&lang=en&only=styles&skin=vector
- the manual on this script says it could be htaccess https://www.mediawiki.org/wiki/Manual:Load.php
- indeed, the default install came with an htaccess file:
# Protect against bug 28235
<IfModule rewrite_module>
        RewriteEngine On
        RewriteCond %{QUERY_STRING} \.[^\\/:*?\x22<>|%]+(#|\?|$) [nocase]
        RewriteRule . - [forbidden]
</IfModule>
- when I commented-out the above, it loaded fine.
- The bug it mentions (28235) is an XSS vuln in IE6 https://phabricator.wikimedia.org/T30235
- so it appears that the .htaccess file is triggering a 403 because the query includes a GET variable with a dot (.) in it. Specifically, it's the "modules=site.styles" bit.
- Well, this issue only gives XSS risks to IE6 users. Honestly, I think the best thing to do is to ban IE6 and delete all the .htaccess files with this block
- added this block to the top of LocalSettings.php
# the .htaccess file that ships with mediawiki to prevent XSS attacks on IE6
# clients breaks our CSS loading. OSE's solution = ban IE6 and remove the
# offending .htaccess files
if(strpos($_SERVER['HTTP_USER_AGENT'], 'MSIE 6.') !== false) {
        die( "Sorry, you must update your browser to use this site." );
}
- deleted the contents of the htdocs/.htaccess file (the entire file was just this bugfix for 28235)
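- a minimal sketch for finding any other copies of this block that mediawiki may have shipped in subdirectories:
# list every .htaccess under the docroot for review
find /var/www/html/wiki.opensourceecology.org/htdocs -name '.htaccess'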
- confirmed that the site css looks good!
- the main page load still triggers a modsec alert
- 981260, sqli
- 950911, generic attack (response splitting?)
- 973302, xss
- 973324, xss
- 973317, xss
- 981255, sqli
- 958057, xss
- 958056, xss
- 973327, xss
- strangely, the sys_get_temp_dir() result doesn't match our upload_tmp_dir! Is this a bug in php? (apparently not: sys_get_temp_dir() consults the TMPDIR environment variable and, since php 5.5, the separate 'sys_temp_dir' ini directive; it never reads upload_tmp_dir)
[root@hetzner2 httpd]# grep -i 'upload_tmp_dir' /etc/php.ini upload_tmp_dir = "/var/lib/php/tmp_upload" [root@hetzner2 httpd]# echo "<?=sys_get_temp_dir();?>" | php /tmp [root@hetzner2 httpd]# echo "<?=php_ini_loaded_file();?>" | php /etc/php.ini [root@hetzner2 httpd]#
- anyway, I just manually set `$wgTmpDirectory = "/var/lib/php/tmp_upload"` in LocalSettings.php
- that fixed all the thumbnail creation issues.
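- as an aside, a less surprising fix might be php's 'sys_temp_dir' ini directive (added in php 5.5), which sys_get_temp_dir() does consult; a minimal sketch (assuming the commented-out default line exists in /etc/php.ini):
# point php's notion of the system temp dir at the hardened upload dir
sed -i 's|^;\?\s*sys_temp_dir\s*=.*|sys_temp_dir = "/var/lib/php/tmp_upload"|' /etc/php.ini
# verify (an apache restart is needed for mod_php to pick it up)
echo "<?=sys_get_temp_dir();?>" | php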
- sent an email to Marcin & Catarina asking for validation
Mon Feb 12, 2018
- updated CHG-2018-02-05 with last week's fixes
- Continuing to fix issues on osemain
- /marcin-jakubowski/
- Marcin pointed out that his headshot is missing on his about page of the staging site
- The offending html points to
<div class="center"><div class=" image-box aligncenter clearfix"><a href="#" title="Marcin Jakubowski"><img class="photo_frame rounded-img" src="http://opensourceecology.org/wp-content/uploads/2014/02/MarcinHeadshot-300x300.png" alt="Marcin Jakubowski"></a></div>
- I would expect this to be a mixed content error (http over https), but the firefox developer console doesn't mention this image specifically (though it does mention an image from creativecommons.org, licensebuttons.net, and our OSE_yellow-copy2.png logo).
- Anyway, when I replace the src attribute's 'http://opensourceecology.org' with 'https://osemain.opensourceecology.org', it works. Because of the name change, I'll just have to address this post-migration. I'll probably end-up doing a database-wide replacement of 'http://opensourceecology.org/' with 'https://www.opensourceecology.org/'
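- when that day comes, wp-cli's search-replace is probably safer than a raw sed over the mysqldump, since it also fixes the string lengths inside php-serialized option values; a minimal sketch (path per our existing wp-cli usage; dry-run first):
sudo -u wp -i wp --path=/var/www/html/www.opensourceecology.org/htdocs search-replace 'http://opensourceecology.org/' 'https://www.opensourceecology.org/' --dry-run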
- was unable to reproduce Marcin's issue clicking the workshop link
- sent marcin an email following-up on his edit issues
- confirmed that the awstats issues are fixed for obi
- added awstats config files for awstats.opensourceecology.org
[root@hetzner2 awstats.opensourceecology.org]# ls -lah /etc/awstats | grep -i opensourceecology -rw-r--r-- 1 root root 175 Feb 12 21:37 awstats.fef.opensourceecology.org.conf -rw-r--r-- 1 root root 175 Feb 12 21:31 awstats.www.opensourceecology.org.conf [root@hetzner2 awstats.opensourceecology.org]#
- added cron jobs for ose sites:
[root@hetzner2 awstats.opensourceecology.org]# cat /etc/cron.d/awstats_generate_static_files
06 * * * * root /bin/nice /usr/share/awstats/tools/awstats_updateall.pl -configdir=/etc/awstats/ now
16 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=www.openbuildinginstitute.org -dir=/var/www/html/awstats.openbuildinginstitute.org/htdocs/
17 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=seedhome.openbuildinginstitute.org -dir=/var/www/html/awstats.openbuildinginstitute.org/htdocs/
18 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=fef.opensourceecology.org -dir=/var/www/html/awstats.opensourceecology.org/htdocs/
19 * * * * root /bin/nice /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=www.opensourceecology.org -dir=/var/www/html/awstats.opensourceecology.org/htdocs/
[root@hetzner2 awstats.opensourceecology.org]#
- fixed permissions on the .htpasswd file next to the docroot (one directory above the docroot)
[root@hetzner2 awstats.opensourceecology.org]# date Mon Feb 12 22:10:47 UTC 2018 [root@hetzner2 awstats.opensourceecology.org]# pwd /var/www/html/awstats.opensourceecology.org [root@hetzner2 awstats.opensourceecology.org]# ls -lah total 16K drwxr-xr-x 3 root root 4.0K Feb 9 21:27 . drwxr-xr-x 16 root root 4.0K Feb 9 20:33 .. drwxr-xr-x 3 root root 4.0K Feb 12 21:42 htdocs -r-------- 1 apache apache 138 Feb 9 21:27 .htpasswd [root@hetzner2 awstats.opensourceecology.org]#
- that really shouldn't be owned by apache (it should be owned by a group containing apache, with read-only permissions), but I'll fix that later
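- a minimal sketch of that eventual fix, mirroring what we do for wp-config.php (assumes the apache-admins group described below):
# root owns the file; the apache-admins group (which contains apache) gets read-only
chown root:apache-admins /var/www/html/awstats.opensourceecology.org/.htpasswd
chmod 0040 /var/www/html/awstats.opensourceecology.org/.htpasswd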
- began to test fixes to the docroot permissions such that they're not owned by the apache user, but the apache user has read access
- first, I created the new docroot default file/dir owner: not-apache
[root@hetzner2 awstats.opensourceecology.org]# useradd --home-dir '/dev/null' --shell '/sbin/nologin' not-apache [root@hetzner2 awstats.opensourceecology.org]# tail -n1 /etc/passwd not-apache:x:1012:1013::/dev/null:/sbin/nologin [root@hetzner2 awstats.opensourceecology.org]#
- next, I added the apache user to the apache-admins group, so that it can have read-only permissions to the password-containing config files
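- roughly (a sketch; assumes the apache-admins group didn't already exist):
groupadd apache-admins
gpasswd -a apache apache-admins
# group membership changes only apply to newly-spawned processes
service httpd restart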
- tested the permission files updates on seedhome
vhostDir="/var/www/html/seedhome.openbuildinginstitute.org" wpDocroot="${vhostDir}/htdocs" chown -R not-apache:apache "${vhostDir}" find "${vhostDir}" -type d -exec chmod 0050 {} \; find "${vhostDir}" -type f -exec chmod 0040 {} \; find "${wpDocroot}/wp-content" -type f -exec chmod 0060 {} \; find "${wpDocroot}/wp-content" -type d -exec chmod 0070 {} \; chown not-apache:apache-admins "${vhostDir}/wp-config.php" chmod 0040 "${vhostDir}/wp-config.php"
- after the above, I still had permission issues; that was fixed by restarting apache (which, I guess, was necessary for the new group membership to take effect)
- I tested logging-in & an edit. it worked.
- attempted to upload an image, but it failed
Unable to create directory wp-content/uploads/2017/12. Is its parent directory writable by the server?
- for one, the 'apache' user wasn't in the apache group. I fixed this:
[root@hetzner2 htdocs]# gpasswd -a apache apache Adding user apache to group apache [root@hetzner2 htdocs]# service ^C [root@hetzner2 htdocs]# httpd -t Syntax OK [root@hetzner2 htdocs]# service httpd restart Redirecting to /bin/systemctl restart httpd.service [root@hetzner2 htdocs]#
- still having issues. I guess to create new files/directories, the directory must be owned by the apache user, not just a group with write permission that the apache user is in (that works for read only)
- also, the above commands wouldn't work on a fresh install, as it doesn't necessarily already have the 'uploads' directory, so I should add a mkdir. Updating the process:
vhostDir="/var/www/html/seedhome.openbuildinginstitute.org" wpDocroot="${vhostDir}/htdocs" chown -R not-apache:apache "${vhostDir}" find "${vhostDir}" -type d -exec chmod 0050 {} \; find "${vhostDir}" -type f -exec chmod 0040 {} \; chown not-apache:apache-admins "${vhostDir}/wp-config.php" chmod 0040 "${vhostDir}/wp-config.php" [ -d "${wpDocroot}/wp-content/uploads" ] || mkdir "${wpDocroot}/wp-content/uploads" chown -R apache:apache "${wpDocroot}/wp-content/uploads" find "${wpDocroot}/wp-content/uploads" -type f -exec chmod 0660 {} \; find "${wpDocroot}/wp-content/uploads" -type d -exec chmod 0770 {} \;
- that worked. updated the documentation Wordpress#Proper_File.2FDirectory_Ownership_.26_Permissions
- updated the apache vhost to not execute any php files from within the 'uploads' directory
[root@hetzner2 httpd]# grep -C 3 'uploads' conf.d/00-seedhome.openbuildinginstitute.org.conf
</IfModule>
</LocationMatch>

# don't execute any php files inside the uploads directory
<LocationMatch "/wp-content/uploads/.*(?i)\.(cgi|shtml|php3?|phps|phtml)$">
        Order Deny,Allow
        Deny from All
</LocationMatch>
[root@hetzner2 httpd]#
Thr Feb 09, 2018
- did a theme update. The default themes updated, but the active theme (enigmatic) had no updates (or, at least, it's not available via the api)
[root@hetzner2 www.opensourceecology.org]# sudo -u wp -i wp --path=${docrootDir_hetzner2} theme list
...
+------------------------+----------+-----------+---------+
| name                   | status   | update    | version |
+------------------------+----------+-----------+---------+
| enigmatic.20170714.bak | inactive | none      | 3.1     |
| enigmatic              | active   | none      | 3.5     |
| twentyeleven           | inactive | none      | 2.7     |
| twentyfifteen          | inactive | none      | 1.9     |
| twentyfourteen         | inactive | available | 1.0     |
| twentyseventeen        | inactive | none      | 1.4     |
| twentysixteen          | inactive | none      | 1.4     |
| twentyten              | inactive | none      | 2.4     |
| twentythirteen         | inactive | available | 1.1     |
| twentytwelve           | inactive | available | 1.3     |
+------------------------+----------+-----------+---------+
[root@hetzner2 www.opensourceecology.org]# sudo -u wp -i wp --path=${docrootDir_hetzner2} theme update --all
...
Downloading update from https://downloads.wordpress.org/theme/twentyfourteen.2.1.zip...
Using cached file '/home/wp/.wp-cli/cache/theme/twentyfourteen-2.1.zip'...
Unpacking the update...
Installing the latest version...
Removing the old version of the theme...
Theme updated successfully.
Downloading update from https://downloads.wordpress.org/theme/twentythirteen.2.3.zip...
Using cached file '/home/wp/.wp-cli/cache/theme/twentythirteen-2.3.zip'...
Unpacking the update...
Installing the latest version...
Removing the old version of the theme...
Theme updated successfully.
Downloading update from https://downloads.wordpress.org/theme/twentytwelve.2.4.zip...
Unpacking the update...
Installing the latest version...
Removing the old version of the theme...
Theme updated successfully.
+----------------+-------------+-------------+---------+
| name           | old_version | new_version | status  |
+----------------+-------------+-------------+---------+
| twentyfourteen | 1.0         | 2.1         | Updated |
| twentythirteen | 1.1         | 2.3         | Updated |
| twentytwelve   | 1.3         | 2.4         | Updated |
+----------------+-------------+-------------+---------+
Success: Updated 3 of 3 themes.
[root@hetzner2 www.opensourceecology.org]#
- awstats for obi was going to '/var/log/nginx/access.log' instead of the vhost-specific '/var/log/nginx/www.openbuildinginstitute.org/access.log' file
- the permissions were fine; a reload fixed it (note the file size changed from 0 to 940 after the reload & after I queried the site again)
[root@hetzner2 ~]# ls -lah /var/log/nginx/www.openbuildinginstitute.org/ total 19M drw-r--r-- 2 nginx nginx 4.0K Feb 9 18:20 . drwx------ 9 nginx nginx 4.0K Feb 9 04:47 .. -rw-r--r-- 1 nginx nginx 0 Feb 9 04:47 access.log -rw-r--r-- 1 nginx nginx 377 Dec 24 01:30 access.log.1.gz -rw-r--r-- 1 nginx nginx 1.9M Dec 18 03:20 access.log-20171210.gz -rw-r--r-- 1 nginx nginx 1.3M Dec 24 01:20 access.log-20171219.gz -rw-r--r-- 1 nginx nginx 2.6M Jan 3 22:03 access.log-20171225.gz -rw-r--r-- 1 nginx nginx 239K Jan 4 21:36 access.log-20180104.gz -rw-r--r-- 1 nginx nginx 3.2M Jan 18 14:43 access.log-20180105.gz -rw-r--r-- 1 nginx nginx 193K Jan 19 13:03 access.log-20180119.gz -rw-r--r-- 1 nginx nginx 288K Jan 20 20:20 access.log-20180120.gz -rw-r--r-- 1 nginx nginx 4.2M Feb 4 20:19 access.log-20180121.gz -rw-r--r-- 1 nginx nginx 1.7M Feb 9 02:08 access.log-20180205.gz -rw-r--r-- 1 nginx nginx 3.3M Feb 9 18:19 access.log-20180209 -rw-r--r-- 1 nginx nginx 224 Dec 24 01:29 access.log.2.gz -rw-r--r-- 1 nginx nginx 510 Dec 24 01:21 access.log.3.gz -rw-r--r-- 1 nginx nginx 0 Feb 6 06:01 error.log -rw-r--r-- 1 nginx nginx 1.1K Nov 30 17:06 error.log-20171201.gz -rw-r--r-- 1 nginx nginx 1.2K Dec 8 13:12 error.log-20171206.gz -rw-r--r-- 1 nginx nginx 1.8K Dec 12 23:31 error.log-20171210.gz -rw-r--r-- 1 nginx nginx 401 Dec 19 13:45 error.log-20171219.gz -rw-r--r-- 1 nginx nginx 388 Dec 24 05:48 error.log-20171224.gz -rw-r--r-- 1 nginx nginx 3.3K Jan 1 09:19 error.log-20171229.gz -rw-r--r-- 1 nginx nginx 359 Jan 3 23:05 error.log-20180104.gz -rw-r--r-- 1 nginx nginx 3.8K Jan 18 13:55 error.log-20180105.gz -rw-r--r-- 1 nginx nginx 888 Feb 2 10:27 error.log-20180124.gz -rw-r--r-- 1 nginx nginx 319 Feb 5 17:04 error.log-20180206 [root@hetzner2 ~]# service nginx reload Redirecting to /bin/systemctl reload nginx.service [root@hetzner2 ~]# ls -lah /var/log/nginx/www.openbuildinginstitute.org/ total 19M drw-r--r-- 2 nginx nginx 4.0K Feb 9 18:20 . drwx------ 9 nginx nginx 4.0K Feb 9 04:47 .. 
-rw-r--r-- 1 nginx nginx 940 Feb 9 18:20 access.log -rw-r--r-- 1 nginx nginx 377 Dec 24 01:30 access.log.1.gz -rw-r--r-- 1 nginx nginx 1.9M Dec 18 03:20 access.log-20171210.gz -rw-r--r-- 1 nginx nginx 1.3M Dec 24 01:20 access.log-20171219.gz -rw-r--r-- 1 nginx nginx 2.6M Jan 3 22:03 access.log-20171225.gz -rw-r--r-- 1 nginx nginx 239K Jan 4 21:36 access.log-20180104.gz -rw-r--r-- 1 nginx nginx 3.2M Jan 18 14:43 access.log-20180105.gz -rw-r--r-- 1 nginx nginx 193K Jan 19 13:03 access.log-20180119.gz -rw-r--r-- 1 nginx nginx 288K Jan 20 20:20 access.log-20180120.gz -rw-r--r-- 1 nginx nginx 4.2M Feb 4 20:19 access.log-20180121.gz -rw-r--r-- 1 nginx nginx 1.7M Feb 9 02:08 access.log-20180205.gz -rw-r--r-- 1 nginx nginx 3.3M Feb 9 18:20 access.log-20180209 -rw-r--r-- 1 nginx nginx 224 Dec 24 01:29 access.log.2.gz -rw-r--r-- 1 nginx nginx 510 Dec 24 01:21 access.log.3.gz -rw-r--r-- 1 nginx nginx 0 Feb 6 06:01 error.log -rw-r--r-- 1 nginx nginx 1.1K Nov 30 17:06 error.log-20171201.gz -rw-r--r-- 1 nginx nginx 1.2K Dec 8 13:12 error.log-20171206.gz -rw-r--r-- 1 nginx nginx 1.8K Dec 12 23:31 error.log-20171210.gz -rw-r--r-- 1 nginx nginx 401 Dec 19 13:45 error.log-20171219.gz -rw-r--r-- 1 nginx nginx 388 Dec 24 05:48 error.log-20171224.gz -rw-r--r-- 1 nginx nginx 3.3K Jan 1 09:19 error.log-20171229.gz -rw-r--r-- 1 nginx nginx 359 Jan 3 23:05 error.log-20180104.gz -rw-r--r-- 1 nginx nginx 3.8K Jan 18 13:55 error.log-20180105.gz -rw-r--r-- 1 nginx nginx 888 Feb 2 10:27 error.log-20180124.gz -rw-r--r-- 1 nginx nginx 319 Feb 5 17:04 error.log-20180206 [root@hetzner2 ~]#
- the reload *should* already be happening via the logrotate
[root@hetzner2 ~]# cat /etc/logrotate.d/nginx
/var/log/nginx/*log /var/log/nginx/*/*log {
        create 0644 nginx nginx
        daily
        rotate 10
        missingok
        notifempty
        compress
        sharedscripts
        prerotate
                /bin/nice /usr/share/awstats/tools/awstats_updateall.pl now
        endscript
        postrotate
                /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
        endscript
}
[root@hetzner2 ~]#
- I simulated the rotate & kill, and saw the same undesired behaviour
[root@hetzner2 www.openbuildinginstitute.org]# mv access.log access.log.test [root@hetzner2 www.openbuildinginstitute.org]# touch access.log [root@hetzner2 www.openbuildinginstitute.org]# chown nginx:nginx access.log [root@hetzner2 www.openbuildinginstitute.org]# chmod 0644 access.log [root@hetzner2 www.openbuildinginstitute.org]# /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true [root@hetzner2 nginx]# tail -f /var/log/nginx/access.log /var/log/nginx/www.openbuildinginstitute.org/access.log /var/log/nginx/error.log /var/log/nginx/www.openbuildinginstitute.org/error.log ... ==> /var/log/nginx/access.log <== 98.242.98.106 - - [09/Feb/2018:18:30:05 +0000] "GET /wp-content/uploads/2016/02/kitchen_1.jpg HTTP/1.1" 301 178 "https://www.openbuildinginstitute.org/buildings/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0" "-"
- interesting, when I did the kill again, this log popped-up
[root@hetzner2 nginx]# tail -f /var/log/nginx/access.log /var/log/nginx/www.openbuildinginstitute.org/access.log /var/log/nginx/error.log /var/log/nginx/www.openbuildinginstitute.org/error.log ... ==> /var/log/nginx/error.log <== 2018/02/09 18:31:08 [emerg] 26860#0: open() "/var/log/nginx/www.openbuildinginstitute.org/access.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26861#0: open() "/var/log/nginx/www.openbuildinginstitute.org/access.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26866#0: open() "/var/log/nginx/www.openbuildinginstitute.org/access.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26862#0: open() "/var/log/nginx/www.openbuildinginstitute.org/access.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26865#0: open() "/var/log/nginx/www.openbuildinginstitute.org/access.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26859#0: open() "/var/log/nginx/www.openbuildinginstitute.org/access.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26860#0: open() "/var/log/nginx/www.openbuildinginstitute.org/error.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26861#0: open() "/var/log/nginx/www.openbuildinginstitute.org/error.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26866#0: open() "/var/log/nginx/www.openbuildinginstitute.org/error.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26862#0: open() "/var/log/nginx/www.openbuildinginstitute.org/error.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26865#0: open() "/var/log/nginx/www.openbuildinginstitute.org/error.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26859#0: open() "/var/log/nginx/www.openbuildinginstitute.org/error.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26864#0: open() "/var/log/nginx/www.openbuildinginstitute.org/access.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26864#0: open() "/var/log/nginx/www.openbuildinginstitute.org/error.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26863#0: open() "/var/log/nginx/www.openbuildinginstitute.org/access.log" failed (13: Permission denied) 2018/02/09 18:31:08 [emerg] 26863#0: open() "/var/log/nginx/www.openbuildinginstitute.org/error.log" failed (13: Permission denied)
- but the permissions look fine
[root@hetzner2 www.openbuildinginstitute.org]# ls -lah /var/log/nginx/www.openbuildinginstitute.org/access.log -rw-r--r-- 1 nginx nginx 0 Feb 9 18:26 /var/log/nginx/www.openbuildinginstitute.org/access.log
- ah, found the issue thanks to this https://stackoverflow.com/questions/37245926/nginx-write-logs-to-the-old-file-after-running-logrotate
[root@hetzner2 www.openbuildinginstitute.org]# ls -lah /var/log/nginx/ total 9.0M drwx------ 9 nginx nginx 4.0K Feb 9 04:47 . drwxr-xr-x 13 root root 12K Feb 9 04:47 .. -rw-r--r-- 1 nginx nginx 2.2M Feb 9 18:39 access.log -rw-r--r-- 1 nginx nginx 222 Dec 24 01:33 access.log.1.gz -rw-r--r-- 1 nginx nginx 154K Jan 31 07:43 access.log-20180131.gz -rw-r--r-- 1 nginx nginx 186K Feb 1 06:41 access.log-20180201.gz -rw-r--r-- 1 nginx nginx 107K Feb 2 08:54 access.log-20180202.gz -rw-r--r-- 1 nginx nginx 83K Feb 3 05:49 access.log-20180203.gz -rw-r--r-- 1 nginx nginx 81K Feb 4 04:48 access.log-20180204.gz -rw-r--r-- 1 nginx nginx 265K Feb 5 09:12 access.log-20180205.gz -rw-r--r-- 1 nginx nginx 351K Feb 6 06:00 access.log-20180206.gz -rw-r--r-- 1 nginx nginx 458K Feb 7 07:35 access.log-20180207.gz -rw-r--r-- 1 nginx nginx 443K Feb 8 07:21 access.log-20180208.gz -rw-r--r-- 1 nginx nginx 4.0M Feb 9 04:46 access.log-20180209 -rw-r--r-- 1 nginx nginx 420 Dec 24 01:32 access.log.2.gz -rw-r--r-- 1 nginx nginx 207 Dec 24 01:29 access.log.3.gz -rw-r--r-- 1 nginx nginx 187 Dec 24 01:28 access.log.4.gz -rw-r--r-- 1 nginx nginx 1.2K Dec 24 01:28 access.log.5.gz -rw-r--r-- 1 nginx nginx 271 Dec 24 01:21 access.log.6.gz -rw-r--r-- 1 nginx nginx 249K Feb 9 18:37 error.log -rw-r--r-- 1 nginx nginx 211 Dec 24 01:33 error.log.1.gz -rw-r--r-- 1 nginx nginx 9.6K Jan 31 07:42 error.log-20180131.gz -rw-r--r-- 1 nginx nginx 9.0K Feb 1 06:41 error.log-20180201.gz -rw-r--r-- 1 nginx nginx 7.8K Feb 2 08:52 error.log-20180202.gz -rw-r--r-- 1 nginx nginx 6.0K Feb 3 05:49 error.log-20180203.gz -rw-r--r-- 1 nginx nginx 6.2K Feb 4 04:48 error.log-20180204.gz -rw-r--r-- 1 nginx nginx 8.5K Feb 5 09:03 error.log-20180205.gz -rw-r--r-- 1 nginx nginx 9.6K Feb 6 06:00 error.log-20180206.gz -rw-r--r-- 1 nginx nginx 13K Feb 7 07:35 error.log-20180207.gz -rw-r--r-- 1 nginx nginx 13K Feb 8 07:21 error.log-20180208.gz -rw-r--r-- 1 nginx nginx 379K Feb 9 04:46 error.log-20180209 -rw-r--r-- 1 nginx nginx 592 Dec 24 01:33 error.log.2.gz -rw-r--r-- 1 nginx nginx 413 Dec 24 01:29 error.log.3.gz -rw-r--r-- 1 nginx nginx 210 Dec 24 01:28 error.log.4.gz -rw-r--r-- 1 nginx nginx 211 Dec 24 01:27 error.log.5.gz -rw-r--r-- 1 nginx nginx 506 Dec 24 01:24 error.log.6.gz drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 fef.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 forum.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 oswh.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 seedhome.openbuildinginstitute.org drwxr-xr-x 2 nginx nginx 4.0K Feb 8 07:21 wiki.opensourceecology.org drw-r--r-- 2 nginx nginx 4.0K Feb 9 18:26 www.openbuildinginstitute.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 www.opensourceecology.org [root@hetzner2 www.openbuildinginstitute.org]#
- if you look at the above output of the /var/log/nginx contents, it's obvious really. One dir is 0644 while the rest are 0755. We need that containing dir to have execute permissions!
[root@hetzner2 www.openbuildinginstitute.org]# chmod 0755 /var/log/nginx/www.openbuildinginstitute.org [root@hetzner2 www.openbuildinginstitute.org]# ls -lah /var/log/nginx/ total 9.0M drwx------ 9 nginx nginx 4.0K Feb 9 04:47 . drwxr-xr-x 13 root root 12K Feb 9 04:47 .. -rw-r--r-- 1 nginx nginx 2.2M Feb 9 18:41 access.log -rw-r--r-- 1 nginx nginx 222 Dec 24 01:33 access.log.1.gz -rw-r--r-- 1 nginx nginx 154K Jan 31 07:43 access.log-20180131.gz -rw-r--r-- 1 nginx nginx 186K Feb 1 06:41 access.log-20180201.gz -rw-r--r-- 1 nginx nginx 107K Feb 2 08:54 access.log-20180202.gz -rw-r--r-- 1 nginx nginx 83K Feb 3 05:49 access.log-20180203.gz -rw-r--r-- 1 nginx nginx 81K Feb 4 04:48 access.log-20180204.gz -rw-r--r-- 1 nginx nginx 265K Feb 5 09:12 access.log-20180205.gz -rw-r--r-- 1 nginx nginx 351K Feb 6 06:00 access.log-20180206.gz -rw-r--r-- 1 nginx nginx 458K Feb 7 07:35 access.log-20180207.gz -rw-r--r-- 1 nginx nginx 443K Feb 8 07:21 access.log-20180208.gz -rw-r--r-- 1 nginx nginx 4.0M Feb 9 04:46 access.log-20180209 -rw-r--r-- 1 nginx nginx 420 Dec 24 01:32 access.log.2.gz -rw-r--r-- 1 nginx nginx 207 Dec 24 01:29 access.log.3.gz -rw-r--r-- 1 nginx nginx 187 Dec 24 01:28 access.log.4.gz -rw-r--r-- 1 nginx nginx 1.2K Dec 24 01:28 access.log.5.gz -rw-r--r-- 1 nginx nginx 271 Dec 24 01:21 access.log.6.gz -rw-r--r-- 1 nginx nginx 250K Feb 9 18:40 error.log -rw-r--r-- 1 nginx nginx 211 Dec 24 01:33 error.log.1.gz -rw-r--r-- 1 nginx nginx 9.6K Jan 31 07:42 error.log-20180131.gz -rw-r--r-- 1 nginx nginx 9.0K Feb 1 06:41 error.log-20180201.gz -rw-r--r-- 1 nginx nginx 7.8K Feb 2 08:52 error.log-20180202.gz -rw-r--r-- 1 nginx nginx 6.0K Feb 3 05:49 error.log-20180203.gz -rw-r--r-- 1 nginx nginx 6.2K Feb 4 04:48 error.log-20180204.gz -rw-r--r-- 1 nginx nginx 8.5K Feb 5 09:03 error.log-20180205.gz -rw-r--r-- 1 nginx nginx 9.6K Feb 6 06:00 error.log-20180206.gz -rw-r--r-- 1 nginx nginx 13K Feb 7 07:35 error.log-20180207.gz -rw-r--r-- 1 nginx nginx 13K Feb 8 07:21 error.log-20180208.gz -rw-r--r-- 1 nginx nginx 379K Feb 9 04:46 error.log-20180209 -rw-r--r-- 1 nginx nginx 592 Dec 24 01:33 error.log.2.gz -rw-r--r-- 1 nginx nginx 413 Dec 24 01:29 error.log.3.gz -rw-r--r-- 1 nginx nginx 210 Dec 24 01:28 error.log.4.gz -rw-r--r-- 1 nginx nginx 211 Dec 24 01:27 error.log.5.gz -rw-r--r-- 1 nginx nginx 506 Dec 24 01:24 error.log.6.gz drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 fef.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 forum.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 oswh.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 seedhome.openbuildinginstitute.org drwxr-xr-x 2 nginx nginx 4.0K Feb 8 07:21 wiki.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 18:26 www.openbuildinginstitute.org drwxr-xr-x 2 nginx nginx 4.0K Feb 9 04:47 www.opensourceecology.org [root@hetzner2 www.openbuildinginstitute.org]#
- that appears to have fixed it!
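- a minimal way to re-verify the whole path without waiting for cron is to let logrotate itself drive the rotation (a sketch; -d is a dry-run, -f forces a real rotation):
logrotate -d /etc/logrotate.d/nginx
logrotate -f /etc/logrotate.d/nginx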
- I updated the documentation for creating new wordpress vhosts to use these permissions at the get-go Wordpress#Create_New_Wordpress_Vhost
- began investigating how the hell awstats is accessible on port 443 and, more importantly, how certbot is able to refresh its certificate
- I found the logs of attempting to load 'https://awstats.openbuildinginstitute.org/fuck' in '/var/log/nginx/seedhome.openbuildinginstitute.org/access.log'
[root@hetzner2 nginx]# date Fri Feb 9 19:08:10 UTC 2018 [root@hetzner2 nginx]# pwd /var/log/nginx [root@hetzner2 nginx]# tail -f *.log */*.log | grep -i fuck 98.242.98.106 - - [09/Feb/2018:19:04:10 +0000] "GET /fuck HTTP/1.1" 401 381 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0" "-" [root@hetzner2 nginx]# grep -irl 'fuck' * seedhome.openbuildinginstitute.org/access.log [root@hetzner2 nginx]#
- I guess that site became our default/catch-all, picking up requests on obi's ip address (138.201.84.223) on port 443. And the nginx config file is generic enough to just pass the host header along to varnish using a dynamic variable ($host), so then apache actually serves the correct page.
- so I think the best thing to do is to explicitly define an nginx listener on port 443 for each awstats site, passing to varnish per usual (but logging in the correct place) && putting the extensible configs in the proper & expected place. Then I'll configure apache to have 2 distinct docroots, selected by whether X-Forwarded-Port equals 4443 or 443: 4443 will serve the htpasswd-protected awstats files, and 443 will go to a publicly-accessible docroot where certbot will generate files to negotiate https cert updates.
- successfully tested changing the "X-Forwarded-Port 443" line in the nginx config to "X-Forwarded-Port $server_port"
[root@hetzner2 conf.d]# cat /etc/nginx/conf.d/awstats.openbuildinginstitute.org.conf
################################################################################
# File:    awstats.openbuildinginstitute.org.conf
# Version: 0.2
# Purpose: Internet-listening web server for truncating https, basic DOS
#          protection, and passing to varnish cache (varnish then passes to
#          apache)
# Author:  Michael Altfield <michael@opensourceecology.org>
# Created: 2017-11-23
# Updated: 2018-02-09
################################################################################
server {

        include conf.d/secure.include;
        include conf.d/ssl.openbuildinginstitute.org.include;

        listen 138.201.84.223:4443;
        listen 138.201.84.223:443;
        #listen [::]:443;

        server_name awstats.openbuildinginstitute.org;

        #############
        # SITE_DOWN #
        #############
        # uncomment this block && restart nginx prior to apache work to display the
        # "SITE DOWN" webpage for our clients

        #root /var/www/html/SITE_DOWN/htdocs/;
        #index index.html index.htm;

        # force all requests to load exactly this page
        #location / {
        #       try_files $uri /index.html;
        #}

        ###################
        # SEND TO VARNISH #
        ###################

        location / {
                proxy_pass http://127.0.0.1:6081;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                #proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Port $server_port;
                proxy_set_header Host $host;
        }
}
[root@hetzner2 conf.d]#
- apache isn't very powerful at conditionals, so I decided to move the logic to nginx & varnish
- updated /etc/httpd/conf/httpd.conf to listen on 127.0.0.1:8010 for the certbot vhost
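- the relevant httpd.conf bits are roughly the following (a reconstruction from the certbot webroot used below, not a verbatim paste of the config):
# hypothetical sketch of the certbot-only vhost:
#   Listen 127.0.0.1:8010
#   <VirtualHost 127.0.0.1:8010>
#       DocumentRoot /var/www/html/certbot/htdocs
#   </VirtualHost>
# confirm what's actually configured:
grep -rn '8010' /etc/httpd/conf/httpd.conf /etc/httpd/conf.d/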
- successfully added 'awstats.opensourceecology.org' to cert SAN with certbot
[root@hetzner2 ~]# certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org -w /var/www/html/www.opensourceecology.org/htdocs -d osemain.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org -w /var/www/html/certbot/htdocs -d awstats.opensourceecology.org ... IMPORTANT NOTES: - Congratulations! Your certificate and chain have been saved at: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem Your key file has been saved at: /etc/letsencrypt/live/opensourceecology.org/privkey.pem Your cert will expire on 2018-05-10. To obtain a new or tweaked version of this certificate in the future, simply run certbot again. To non-interactively renew *all* of your certificates, run "certbot renew" - If you like Certbot, please consider supporting our work by: Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate Donating to EFF: https://eff.org/donate-le [root@hetzner2 ~]# /bin/chmod 0400 /etc/letsencrypt/archive/*/pri* [root@hetzner2 ~]# nginx -t && service nginx reload nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Redirecting to /bin/systemctl reload nginx.service [root@hetzner2 ~]#
- so first, nginx is listening on ports '443' && '4443'. It sets the 'X-Forwarded-Port' header to whichever one it heard the request come in on (referenced by $server_port)
[root@hetzner2 conf.d]# date Fri Feb 9 21:37:33 UTC 2018 [root@hetzner2 conf.d]# pwd /etc/nginx/conf.d [root@hetzner2 conf.d]# cat awstats.openbuildinginstitute.org.conf ################################################################################ # File: awstats.openbuildinginstitute.org.conf # Version: 0.2 # Purpose: Internet-listening web server for truncating https, basic DOS # protection, and passing to varnish cache (varnish then passes to # apache) # Author: Michael Altfield <michael@opensourceecology.org> # Created: 2017-11-23 # Updated: 2018-02-09 ################################################################################ server { include conf.d/secure.include; include conf.d/ssl.openbuildinginstitute.org.include; listen 138.201.84.223:4443; listen 138.201.84.223:443; server_name awstats.openbuildinginstitute.org; ############# # SITE_DOWN # ############# # uncomment this block && restart nginx prior to apache work to display the # "SITE DOWN" webpage for our clients #root /var/www/html/SITE_DOWN/htdocs/; #index index.html index.htm; # force all requests to load exactly this page #location / { # try_files $uri /index.html; #} ################### # SEND TO VARNISH # ################### location / { proxy_pass http://127.0.0.1:6081; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; #proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header Host $host; } } [root@hetzner2 conf.d]#
- nginx sends its requests to varnish. The varnish config for the awstats sites defines 2x distinct backends. Both are apache, but on different ports: the standard 8000 name-based-vhost is used for requests that arrived on port 4443, && a new vhost running on 8010 is used specifically for certbot.
[root@hetzner2 varnish]# date Fri Feb 9 21:39:47 UTC 2018 [root@hetzner2 varnish]# pwd /etc/varnish [root@hetzner2 varnish]# cat sites-enabled/awstats.openbuildinginstitute.org ################################################################################ # File: awstats.openbuildinginstitute.org.vcl # Version: 0.3 # Purpose: Config file for awstats for obi # Author: Michael Altfield <michael@opensourceecology.org> # Created: 2017-11-23 # Updated: 2018-02-09 ################################################################################ vcl 4.0; ########################## # VHOST-SPECIFIC BACKEND # ########################## backend awstats_openbuildinginstitute_org { .host = "127.0.0.1"; .port = "8000"; } backend certbot_openbuildinginstitute_org { .host = "127.0.0.1"; .port = "8010"; } ##################################################################### # STUFF MOSTLY COPIED FROM vcaching WORDPRESS PLUGIN # # https://plugins.svn.wordpress.org/vcaching/trunk/varnish-conf/v4/ # ##################################################################### sub vcl_recv { if ( req.http.host == "awstats.openbuildinginstitute.org" ){ # is this an admin trying to access awstats or certbot trying to renew the # https certificate? if ( req.http.X-Forwarded-Port == "4443" ){ # this is an admin; send them to the auth-protected apache vhost set req.backend_hint = awstats_openbuildinginstitute_org; } else { # this is not an admin accessing the hidden port; send them to the apache # vhost with no content (that certbot uses) set req.backend_hint = certbot_openbuildinginstitute_org; } } } sub vcl_hash { if ( req.http.host == "awstats.openbuildinginstitute.org" ){ # TODO } } sub vcl_backend_response { if ( beresp.backend.name == "awstats_openbuildinginstitute_org" ){ # TODO } } sub vcl_synth { if ( req.http.host == "awstats.openbuildinginstitute.org" ){ # TODO } } sub vcl_pipe { if ( req.http.host == "awstats.openbuildinginstitute.org" ){ # TODO } } sub vcl_deliver { if ( req.http.host == "awstats.openbuildinginstitute.org" ){ # TODO } } [root@hetzner2 varnish]# cat default.vcl ################################################################################ # File: default.vcl # Version: 0.1 # Purpose: Main config file for varnish cache. Note that it's intentionally # mostly bare to allow robust vhost-specific logic. 
Please see this # for more info: # * https://www.getpagespeed.com/server-setup/varnish/varnish-virtual-hosts # Author: Michael Altfield <michael@opensourceecology.org> # Created: 2017-11-12 # Updated: 2017-11-12 ################################################################################ vcl 4.0; ############ # INCLUDES # ############ # import std; include "conf/acl.vcl"; include "lib/purge.vcl"; include "all-vhosts.vcl"; include "catch-all.vcl"; [root@hetzner2 varnish]# cat all-vhosts.vcl ################################################################################ # File: all-hosts.vcl # Version: 0.8 # Purpose: meta config file that simply imports the site-specific vcl files # stored in the 'sites-enabled' directory Please see this for more info # * https://www.getpagespeed.com/server-setup/varnish/varnish-virtual-hosts # Author: Michael Altfield <michael@opensourceecology.org> # Created: 2017-11-12 # Updated: 2018-02-09 ################################################################################ include "sites-enabled/staging.openbuildinginstitute.org"; include "sites-enabled/awstats.openbuildinginstitute.org"; include "sites-enabled/awstats.opensourceecology.org"; include "sites-enabled/www.openbuildinginstitute.org"; include "sites-enabled/www.opensourceecology.org"; include "sites-enabled/seedhome.openbuildinginstitute.org"; include "sites-enabled/fef.opensourceecology.org"; include "sites-enabled/oswh.opensourceecology.org"; include "sites-enabled/forum.opensourceecology.org"; include "sites-enabled/wiki.opensourceecology.org"; [root@hetzner2 varnish]#
- finally, here are the 2x distinct vhosts in apache
[root@hetzner2 conf.d]# date Fri Feb 9 21:41:31 UTC 2018 [root@hetzner2 conf.d]# pwd /etc/httpd/conf.d [root@hetzner2 conf.d]# cat awstats.openbuildinginstitute.org.conf <VirtualHost 127.0.0.1:8000> ServerName awstats.openbuildinginstitute.org DocumentRoot "/var/www/html/awstats.openbuildinginstitute.org/htdocs" Include /etc/httpd/conf.d/ssl.openbuildinginstitute.org <LocationMatch .*\.(svn|git|hg|bzr|cvs|ht)/.*> Deny From All </LocationMatch> </VirtualHost> <Directory "/var/www/html/awstats.openbuildinginstitute.org/htdocs"> AuthType Basic AuthName "Authentication Required" AuthUserFile /var/www/html/awstats.openbuildinginstitute.org/.htpasswd # it is unnecessary to allow certbot here, since it uses port 80 (which we # send to port 443), but this server only listens on port 4443 (via nginx) #Require expr %{REQUEST_URI} =~ m#^/.well-known/acme-challenge/# Require valid-user Options +Indexes +Includes AllowOverride None Order allow,deny Allow from all # Harden vhost docroot by blocking all request types except the 3 essentials <LimitExcept GET POST HEAD> deny from all </LimitExcept> </Directory> # disable mod_security with rules as needed (found by logs in: # /var/log/httpd/modsec_audit.log <Location "/"> <IfModule security2_module> SecRuleEngine On #SecRuleRemoveById 960015 981173 960024 960904 960015 960017 </IfModule> </Location> [root@hetzner2 conf.d]# cat certbot.conf ################################################################################ # File: certbot.conf # Version: 0.1 # Purpose: localhost-only-listening, http-only, name-based-vhost for serving # an empty vhost that's to be used by certbot for refreshing our Let's # Encrypt certificates. This is necessary for admin-only/private vhosts # that operate on non-standard port with basic auth required. For # example, see the configs for awstats. # Author: Michael Altfield <michael@opensourceecology.org> # Created: 2018-02-09 # Updated: 2018-02-09 ################################################################################ <VirtualHost 127.0.0.1:8010> #ServerName awstats.openbuildinginstitute.org DocumentRoot "/var/www/html/certbot/htdocs" #Include /etc/httpd/conf.d/ssl.openbuildinginstitute.org <LocationMatch .*\.(svn|git|hg|bzr|cvs|ht)/.*> Deny From All </LocationMatch> </VirtualHost> <Directory "/var/www/html/certbot/htdocs"> Options +Indexes +Includes AllowOverride None Order allow,deny Allow from all # Harden vhost docroot by blocking all request types except the 3 essentials <LimitExcept GET POST HEAD> deny from all </LimitExcept> </Directory> # disable mod_security with rules as needed (found by logs in: # /var/log/httpd/modsec_audit.log <Location "/"> <IfModule security2_module> SecRuleEngine On #SecRuleRemoveById 960015 981173 960024 960904 960015 960017 </IfModule> </Location>
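- a hedged way to confirm that cert renewals work end-to-end through the new port-8010 certbot vhost (--dry-run uses the Let's Encrypt staging endpoint, so no live certs are replaced):
certbot renew --dry-run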
Thr Feb 08, 2018
- installed our minimal set of security wordpress plugins
[root@hetzner2 htdocs]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin install google-authenticator --activate ... Installing Google Authenticator (0.48) Downloading installation package from https://downloads.wordpress.org/plugin/google-authenticator.0.48.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/google-authenticator-0.48.zip'... Unpacking the package... Installing the plugin... Plugin installed successfully. Activating 'google-authenticator'... Plugin 'google-authenticator' activated. Success: Installed 1 of 1 plugins. [root@hetzner2 htdocs]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin install google-authenticator-encourage-user-activation --activate ... Installing Google Authenticator Encourage User Activation (0.2) Downloading installation package from https://downloads.wordpress.org/plugin/google-authenticator-encourage-user-activation.0.2.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/google-authenticator-encourage-user-activation-0.2.zip'... Unpacking the package... Installing the plugin... Plugin installed successfully. Activating 'google-authenticator-encourage-user-activation'... Plugin 'google-authenticator-encourage-user-activation' activated. Success: Installed 1 of 1 plugins. [root@hetzner2 htdocs]# defaultOtpAccountDescription="`basename ${vhostDir_hetzner2}` wp" [root@hetzner2 htdocs]# pushd ${docrootDir_hetzner2}/wp-content/plugins/google-authenticator /var/www/html/www.opensourceecology.org/htdocs/wp-content/plugins/google-authenticator /var/www/html/www.opensourceecology.org/htdocs [root@hetzner2 google-authenticator]# sed -i "s^\$GA_description\s=\s(\s[\"'].*[\"']^\$GA_description = ( '$defaultOtpAccountDescription'^" google-authenticator.php [root@hetzner2 google-authenticator]# popd /var/www/html/www.opensourceecology.org/htdocs [root@hetzner2 htdocs]# # install 'force-strong-passwords' plugin [root@hetzner2 htdocs]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin install force-strong-passwords --activate ... Installing Force Strong Passwords (1.8.0) Downloading installation package from https://downloads.wordpress.org/plugin/force-strong-passwords.1.8.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/force-strong-passwords-1.8.0.zip'... Unpacking the package... Installing the plugin... Plugin installed successfully. Activating 'force-strong-passwords'... Plugin 'force-strong-passwords' activated. Success: Installed 1 of 1 plugins. [root@hetzner2 htdocs]# [root@hetzner2 htdocs]# # install rename-wp-login plugin [root@hetzner2 htdocs]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin install rename-wp-login --activate ... Installing Rename wp-login.php (2.5.5) Downloading installation package from https://downloads.wordpress.org/plugin/rename-wp-login.2.5.5.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/rename-wp-login-2.5.5.zip'... Unpacking the package... Installing the plugin... Plugin installed successfully. Activating 'rename-wp-login'... Plugin 'rename-wp-login' activated. Success: Installed 1 of 1 plugins. [root@hetzner2 htdocs]# [root@hetzner2 htdocs]# # install "SSL Insecure Content Fixer" pugin [root@hetzner2 htdocs]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin install ssl-insecure-content-fixer --activate ... Installing SSL Insecure Content Fixer (2.5.0) Downloading installation package from https://downloads.wordpress.org/plugin/ssl-insecure-content-fixer.2.5.0.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/ssl-insecure-content-fixer-2.5.0.zip'... Unpacking the package... 
Installing the plugin... Plugin installed successfully. Activating 'ssl-insecure-content-fixer'... Plugin 'ssl-insecure-content-fixer' activated. Success: Installed 1 of 1 plugins. [root@hetzner2 htdocs]# [root@hetzner2 htdocs]# # install "Varnish Caching" pugin [root@hetzner2 htdocs]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin install vcaching --activate ... Installing Varnish Caching (1.6.7) Downloading installation package from https://downloads.wordpress.org/plugin/vcaching.1.6.7.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/vcaching-1.6.7.zip'... Unpacking the package... Installing the plugin... Plugin installed successfully. Activating 'vcaching'... Plugin 'vcaching' activated. Success: Installed 1 of 1 plugins.
- updated existing wordpress plugins on osemain staging site on hetzner2
[root@hetzner2 htdocs]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin update --all ... +----------------------------+-------------+-------------+---------+ | name | old_version | new_version | status | +----------------------------+-------------+-------------+---------+ | akismet | 3.3 | 4.0.2 | Updated | | cyclone-slider-2 | 2.12.4 | 3.2.0 | Updated | | duplicate-post | 3.1.2 | 3.2.1 | Updated | | insert-headers-and-footers | 1.4.1 | 1.4.2 | Updated | | jetpack | 4.7.1 | 5.8 | Updated | | ml-slider | 3.5 | 3.6.8 | Updated | | post-types-order | 1.9.3 | 1.9.3.6 | Updated | | recent-tweets-widget | 1.6.6 | 1.6.8 | Updated | | shareaholic | 7.8.0.4 | 8.6.5 | Updated | | share-on-diaspora | 0.7.1 | 0.7.2 | Updated | | w3-total-cache | 0.9.5.2 | 0.9.6 | Updated | | wp-smushit | 2.6.1 | 2.7.6 | Updated | +----------------------------+-------------+-------------+---------+ Success: Updated 12 of 12 plugins. [root@hetzner2 htdocs]#
- updated all the wp dashboard config settings for the new plugins
- discovered that the "Purge ALL Varnish Cache" button resulted in an error
Trying to purge URL : http://127.0.0.1:6081/.* => 403 Forbidden
- there's a corresponding ossec alert, triggered by modsec:
--388c9d2b-A-- [09/Feb/2018:00:38:31 +0000] WnzthxK3iH4mmnFop@Wm5gAAAAc 127.0.0.1 37192 127.0.0.1 8000 --388c9d2b-B-- PURGE /.* HTTP/1.1 User-Agent: WordPress/4.9.3; https://osemain.opensourceecology.org Accept-Encoding: deflate;q=1.0, compress;q=0.5, gzip;q=0.5 host: osemain.opensourceecology.org X-VC-Purge-Method: regex X-VC-Purge-Host: osemain.opensourceecology.org Connection: Close, close X-Forwarded-For: 127.0.0.1 X-Varnish: 45279 --388c9d2b-F-- HTTP/1.1 403 Forbidden Content-Length: 204 Connection: close Content-Type: text/html; charset=iso-8859-1 --388c9d2b-E-- --388c9d2b-H-- Message: Access denied with code 403 (phase 1). Match of "within %{tx.allowed_methods}" against "REQUEST_METHOD" required. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_30_http_policy.conf"] [line "31"] [id "960032"] [rev "2"] [msg "Method is not allowed by policy"] [data "PURGE"] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "9"] [tag "OWASP_CRS/POLICY/METHOD_NOT_ALLOWED"] [tag "WASCTC/WASC-15"] [tag "OWASP_TOP_10/A6"] [tag "OWASP_AppSensor/RE1"] [tag "PCI/12.1"] Action: Intercepted (phase 1) Stopwatch: 1518136711479621 232 (- - -) Stopwatch2: 1518136711479621 232; combined=97, p1=83, p2=0, p3=0, p4=0, p5=14, sr=18, sw=0, l=0, gc=0 Response-Body-Transformed: Dechunked Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9. Server: Apache Engine-Mode: "ENABLED" --388c9d2b-Z--
- added id = 960032 to the httpd config whitelist
- varnish log alerts showed that the purge request was being sent to obi, so I added it to that apache config too
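- a hedged way to re-test the purge path after the whitelisting (this mirrors the PURGE request captured in the modsec audit log above; the URL is passed literally):
curl -s -X PURGE -H 'Host: osemain.opensourceecology.org' -H 'X-VC-Purge-Method: regex' 'http://127.0.0.1:6081/.*'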
- confirmed this works in seedhome.openbuildinginstitute.org
- confirmed this works in fef.opensourceecology.org
- ah, I think this fails because the varnish config is defined to use 'www.opensourceecology.org', but the staging site is temporarily 'osemain.opensourceecology.org' due to a name collision with the prod site. I'm confident this will work once the name is correct after cutover, so I'm ignoring this issue. In the meantime, I'm pretty sure nothing is being cached at all (which is actually nice for testing the staging site).
- the top-left logo shows up fine on my disposable vm, but not in my locked-down https-everywhere browser
- should probably change the url in Appearance -> Theme Options -> Site Logo to use https instead (note I can't test this until after the name change)
- discovered that the facebook iframe on the main page was missing
- solution was to change 'http' to 'https'
[segment class="first-segment"] <div class="center"><iframe style="border: none; overflow: hidden; width: 820px; height: 435px; background: white; float: center;" src="https://www.facebook.com/plugins/likebox.php?href=http%3A%2F%2Fwww.facebook.com%2FOpenSourceEcology&width=820&colorscheme=light&show_faces=true&border_color&stream=true&header=true&height=435" width="300" height="150" frameborder="0" scrolling="yes"></iframe></div> [/segment]
- re-checked the previously-reviewed problem pages; they're still in their expected state
- /contributors/
- the top slider that Catarina said we've permanently lost is still throwing an error; Marcin will remove this from the prod site, so this will be a non-issue at cutover
- the timeline shows up properly
- /community-true-fans/
- this page perfectly matches prod
- /history-timeline/
- this page perfectly matches prod
- /cnc-torch-table-workshop/
- Eventbrite on the left is still working as desired
- video embed issues are still present as expected, and a non-blocker to the migration
- /about-videos-3/
- missing video content is still (not) present as expected, and a non-blocker to the migration
- /contributors/
- all looks good to me. sent an email to Marcin asking for validation.
- awstats was not updating again
- found a permissions issue
[root@hetzner2 conf.d]# ls -lah /var/log/nginx/ total 11M drwx------ 9 nginx nginx 4.0K Feb 8 07:21 . drwxr-xr-x 13 root root 12K Feb 8 07:21 .. -rw-r--r-- 1 nginx nginx 3.5M Feb 9 02:03 access.log -rw-r--r-- 1 nginx nginx 222 Dec 24 01:33 access.log.1.gz -rw-r--r-- 1 nginx nginx 143K Jan 30 05:02 access.log-20180130.gz -rw-r--r-- 1 nginx nginx 154K Jan 31 07:43 access.log-20180131.gz -rw-r--r-- 1 nginx nginx 186K Feb 1 06:41 access.log-20180201.gz -rw-r--r-- 1 nginx nginx 107K Feb 2 08:54 access.log-20180202.gz -rw-r--r-- 1 nginx nginx 83K Feb 3 05:49 access.log-20180203.gz -rw-r--r-- 1 nginx nginx 81K Feb 4 04:48 access.log-20180204.gz -rw-r--r-- 1 nginx nginx 265K Feb 5 09:12 access.log-20180205.gz -rw-r--r-- 1 nginx nginx 351K Feb 6 06:00 access.log-20180206.gz -rw-r--r-- 1 nginx nginx 458K Feb 7 07:35 access.log-20180207.gz -rw-r--r-- 1 nginx nginx 4.8M Feb 8 07:21 access.log-20180208 -rw-r--r-- 1 nginx nginx 420 Dec 24 01:32 access.log.2.gz -rw-r--r-- 1 nginx nginx 207 Dec 24 01:29 access.log.3.gz -rw-r--r-- 1 nginx nginx 187 Dec 24 01:28 access.log.4.gz -rw-r--r-- 1 nginx nginx 1.2K Dec 24 01:28 access.log.5.gz -rw-r--r-- 1 nginx nginx 271 Dec 24 01:21 access.log.6.gz -rw-r--r-- 1 nginx nginx 323K Feb 9 02:02 error.log -rw-r--r-- 1 nginx nginx 211 Dec 24 01:33 error.log.1.gz -rw-r--r-- 1 nginx nginx 7.8K Jan 30 05:00 error.log-20180130.gz -rw-r--r-- 1 nginx nginx 9.6K Jan 31 07:42 error.log-20180131.gz -rw-r--r-- 1 nginx nginx 9.0K Feb 1 06:41 error.log-20180201.gz -rw-r--r-- 1 nginx nginx 7.8K Feb 2 08:52 error.log-20180202.gz -rw-r--r-- 1 nginx nginx 6.0K Feb 3 05:49 error.log-20180203.gz -rw-r--r-- 1 nginx nginx 6.2K Feb 4 04:48 error.log-20180204.gz -rw-r--r-- 1 nginx nginx 8.5K Feb 5 09:03 error.log-20180205.gz -rw-r--r-- 1 nginx nginx 9.6K Feb 6 06:00 error.log-20180206.gz -rw-r--r-- 1 nginx nginx 13K Feb 7 07:35 error.log-20180207.gz -rw-r--r-- 1 nginx nginx 433K Feb 8 07:21 error.log-20180208 -rw-r--r-- 1 nginx nginx 592 Dec 24 01:33 error.log.2.gz -rw-r--r-- 1 nginx nginx 413 Dec 24 01:29 error.log.3.gz -rw-r--r-- 1 nginx nginx 210 Dec 24 01:28 error.log.4.gz -rw-r--r-- 1 nginx nginx 211 Dec 24 01:27 error.log.5.gz -rw-r--r-- 1 nginx nginx 506 Dec 24 01:24 error.log.6.gz drwxr-xr-x 2 root root 4.0K Feb 8 07:21 fef.opensourceecology.org drwxr-xr-x 2 root root 4.0K Feb 8 07:21 forum.opensourceecology.org drwxr-xr-x 2 root root 4.0K Feb 8 07:21 oswh.opensourceecology.org drwxr-xr-x 2 root root 4.0K Feb 8 07:21 seedhome.openbuildinginstitute.org drwxr-xr-x 2 root root 4.0K Feb 8 07:21 wiki.opensourceecology.org drw-r--r-- 2 nginx nginx 4.0K Feb 9 02:01 www.openbuildinginstitute.org drwxr-xr-x 2 root root 4.0K Feb 8 07:21 www.opensourceecology.org [root@hetzner2 conf.d]#
- those probably should all be owned by nginx
[root@hetzner2 nginx]# chown nginx:nginx fef.opensourceecology.org [root@hetzner2 nginx]# chown -R nginx:nginx fef.opensourceecology.org [root@hetzner2 nginx]# chown -R nginx:nginx forum.opensourceecology.org/ [root@hetzner2 nginx]# chown -R nginx:nginx oswh.opensourceecology.org/ [root@hetzner2 nginx]# chown -R nginx:nginx seedhome.openbuildinginstitute.org/ [root@hetzner2 nginx]# chown -R nginx:nginx wiki.opensourceecology.org/ [root@hetzner2 nginx]# chown -R nginx:nginx www.opensourceecology.org/ [root@hetzner2 nginx]# chown -R nginx:nginx www.openbuildinginstitute.org/ [root@hetzner2 nginx]# ls -lah total 11M drwx------ 9 nginx nginx 4.0K Feb 8 07:21 . drwxr-xr-x 13 root root 12K Feb 8 07:21 .. -rw-r--r-- 1 nginx nginx 3.5M Feb 9 02:06 access.log -rw-r--r-- 1 nginx nginx 222 Dec 24 01:33 access.log.1.gz -rw-r--r-- 1 nginx nginx 143K Jan 30 05:02 access.log-20180130.gz -rw-r--r-- 1 nginx nginx 154K Jan 31 07:43 access.log-20180131.gz -rw-r--r-- 1 nginx nginx 186K Feb 1 06:41 access.log-20180201.gz -rw-r--r-- 1 nginx nginx 107K Feb 2 08:54 access.log-20180202.gz -rw-r--r-- 1 nginx nginx 83K Feb 3 05:49 access.log-20180203.gz -rw-r--r-- 1 nginx nginx 81K Feb 4 04:48 access.log-20180204.gz -rw-r--r-- 1 nginx nginx 265K Feb 5 09:12 access.log-20180205.gz -rw-r--r-- 1 nginx nginx 351K Feb 6 06:00 access.log-20180206.gz -rw-r--r-- 1 nginx nginx 458K Feb 7 07:35 access.log-20180207.gz -rw-r--r-- 1 nginx nginx 4.8M Feb 8 07:21 access.log-20180208 -rw-r--r-- 1 nginx nginx 420 Dec 24 01:32 access.log.2.gz -rw-r--r-- 1 nginx nginx 207 Dec 24 01:29 access.log.3.gz -rw-r--r-- 1 nginx nginx 187 Dec 24 01:28 access.log.4.gz -rw-r--r-- 1 nginx nginx 1.2K Dec 24 01:28 access.log.5.gz -rw-r--r-- 1 nginx nginx 271 Dec 24 01:21 access.log.6.gz -rw-r--r-- 1 nginx nginx 324K Feb 9 02:06 error.log -rw-r--r-- 1 nginx nginx 211 Dec 24 01:33 error.log.1.gz -rw-r--r-- 1 nginx nginx 7.8K Jan 30 05:00 error.log-20180130.gz -rw-r--r-- 1 nginx nginx 9.6K Jan 31 07:42 error.log-20180131.gz -rw-r--r-- 1 nginx nginx 9.0K Feb 1 06:41 error.log-20180201.gz -rw-r--r-- 1 nginx nginx 7.8K Feb 2 08:52 error.log-20180202.gz -rw-r--r-- 1 nginx nginx 6.0K Feb 3 05:49 error.log-20180203.gz -rw-r--r-- 1 nginx nginx 6.2K Feb 4 04:48 error.log-20180204.gz -rw-r--r-- 1 nginx nginx 8.5K Feb 5 09:03 error.log-20180205.gz -rw-r--r-- 1 nginx nginx 9.6K Feb 6 06:00 error.log-20180206.gz -rw-r--r-- 1 nginx nginx 13K Feb 7 07:35 error.log-20180207.gz -rw-r--r-- 1 nginx nginx 433K Feb 8 07:21 error.log-20180208 -rw-r--r-- 1 nginx nginx 592 Dec 24 01:33 error.log.2.gz -rw-r--r-- 1 nginx nginx 413 Dec 24 01:29 error.log.3.gz -rw-r--r-- 1 nginx nginx 210 Dec 24 01:28 error.log.4.gz -rw-r--r-- 1 nginx nginx 211 Dec 24 01:27 error.log.5.gz -rw-r--r-- 1 nginx nginx 506 Dec 24 01:24 error.log.6.gz drwxr-xr-x 2 nginx nginx 4.0K Feb 8 07:21 fef.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 8 07:21 forum.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 8 07:21 oswh.opensourceecology.org drwxr-xr-x 2 nginx nginx 4.0K Feb 8 07:21 seedhome.openbuildinginstitute.org drwxr-xr-x 2 nginx nginx 4.0K Feb 8 07:21 wiki.opensourceecology.org drw-r--r-- 2 nginx nginx 4.0K Feb 9 02:01 www.openbuildinginstitute.org drwxr-xr-x 2 nginx nginx 4.0K Feb 8 07:21 www.opensourceecology.org [root@hetzner2 nginx]#
- that appears to be working. Now I just have to wait another week or so..
- added an 'awstats' record to opensourceecology.org in cloudflare.com
Wed Feb 07, 2018
- Marcin responded to the osemain issues. Here's the status:
- /
- Marcin confirmed that the front-page content (the video) was fixed
- /contributors/
- Catarina will be looking into the "featured collaborators" slider issue on this page which is broken on production & staging
- I'll look into the "collaborators timeline" issue that is fine on prod, but missing on staging
- Marcin said the missing photos on the "collaborators timeline" should be dealt with later & not block this migration
- /community/
- Marcin updated the site, removing the broken Flattr embed http://opensourceecology.org/community/
- /community-true-fans/
- I need to investigate why these 2x sliders are broken in staging, but working in production
- /history-timeline/
- I need to investigate why these 2x sliders are broken in staging, but working in production
- /cnc-torch-table-workshop/
- Eventbrite is truncated on the left
- The first youtube video shows up as a link, instead of as an embed = https://www.youtube.com/watch?v=JOH03vzDYQg
- The first Vimeo video shows up as a link, instead of as an embed = https://vimeo.com/23785186
- /about-videos-3/
- Marcin said the videos are missing. I asked him to attempt to add them & if he encountered any issues, send me the errors so I can try to reproduce & fix it.
- began debugging osemain staging slider issues on /contributors/
- first, none of the plugins were enabled. This was a permission issue, solved by following the guide at Wordpress
vhostDir="/var/www/html/www.opensourceecology.org"
wpDocroot="${vhostDir}/htdocs"

chown -R apache:apache "${vhostDir}"
find "${vhostDir}" -type d -exec chmod 0750 {} \;
find "${vhostDir}" -type f -exec chmod 0640 {} \;
find "${wpDocroot}/wp-content" -type f -exec chmod 0660 {} \;
find "${wpDocroot}/wp-content" -type d -exec chmod 0770 {} \;
chown apache:apache-admins "${vhostDir}/wp-config.php"
chmod 0440 "${vhostDir}/wp-config.php"
- next "revolution slider" wasn't enabled not sure why. I enabled it and the one named "Revolution Slider Patch"
- finally, I saw that the last slider wasn't actually a slider; it was just an iframe to an http site = http://cdn.knightlab.com/libs/timeline/latest/embed/index.html?source=0ArpE5Y9PpJCXdFhleTktTkoyeHNFUXktUXJCMGVkbVE&font=Bevan-PotanoSans&maptype=toner&lang=en&start_at_end=true&hash_bookmark=true&height=650#16
- replacing the above iframe's http protocol with https fixed the issue, but first I had to whitelist some modsec triggers to be able to update the page (whitelist sketch after this list):
- 958057, XSS
- 958056, XSS
- 973327, XSS
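- a hedged sketch of the resulting per-vhost whitelist additions (the exact apache config file isn't captured in this log entry; the SecRuleRemoveById pattern mirrors the commented-out example in the awstats vhost config above):
# hypothetical location: the vhost's <Location "/"> mod_security block
#   <IfModule security2_module>
#       SecRuleEngine On
#       SecRuleRemoveById 958056 958057 973327
#   </IfModule>
apachectl configtest && service httpd reload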
- began debugging issues on /community-true-fans/
- same issue, there was an iframe linking to an http page on cdn.knightlab.com; changing to https fixed it https://cdn.knightlab.com/libs/timeline/latest/embed/index.html?source=0ArpE5Y9PpJCXdEhYb1NkR1dGRnhPV2FGYndseUg2WkE&font=Merriweather-NewsCycle&maptype=toner&lang=en&start_at_end=true&start_at_slide=34&height=650
- began debugging issues on /history-timeline/
- same issue, there was an iframe linking to an http page on cdn.knightlab.com; changing to https fixed it https://cdn.knightlab.com/libs/timeline/latest/embed/index.html?source=0AkNG-lv1ELQvdEMxdzRnU2VFbUllZ2Y0cnZPRld3SXc&font=Bevan-PotanoSans&maptype=toner&lang=en&hash_bookmark=true&start_at_slide=23&height=650
- began troubleshooting /cnc-torch-table-workshop/
- found a section on the page edit called "Choose Custom Sidebar" and a drop-down for "Primary Sidebar Choice" that said "CNC Torch Table Workshop". So I opened Appearance -> Widgets in a new tab, and found the "CNC Torch Table Workshop" widget with these contents:
<a href="http://opensourceecology.org/CNC-torch-table-workshop/#overview">Overview</a> <a href="http://opensourceecology.org/CNC-torch-table-workshop/#learning">Learning Outcomes</a> <a href="http://opensourceecology.org/CNC-torch-table-workshop/#schedule">Schedule</a> <a href="http://opensourceecology.org/CNC-torch-table-workshop#registration">Registration</a> <div style="width:195px; text-align:center;" ><iframe src="https://www.eventbrite.com/countdown-widget?eid=38201763503" frameborder="0" height="322" width="195" marginheight="0" marginwidth="0" scrolling="no" allowtransparency="true"></iframe><div style="font-family:Helvetica, Arial; font-size:12px; padding:10px 0 5px; margin:2px; width:195px; text-align:center;" ><a class="powered-by-eb" style="color: #ADB0B6; text-decoration: none;" target="_blank" href="http://www.eventbrite.com/">Powered by Eventbrite</a></div></div>
- I changed the iframe's height from "322" to "999", pressed save, and cleared the varnish cache. That didn't help
- I added ";height:999px" to the containing div, pressed save, and cleared the varnish cache. That didn't help either.
- I replaced the "CNC Torch Table Workshop" widget of type "Text" with one of the same contents of type "Custom HTML", and it worked.
- the "Custom HTML" widget was added in wordpress v4.8.1 https://wptavern.com/wordpress-4-8-1-adds-a-dedicated-custom-html-widget
- this makes sense, as this migration includes the update of core wp from v4.7.9 to v4.9.2
- I had to add some breaks to put the links on separate lines; I guess that's what "Text" does for you--and also breaks dynamic height? The new contents are below, after a digression about the new widget's editor.
- ugh, fucking hell. I can't copy from the new "Custom HTML" widget? It adds line numbers to my clipboard, breaking copy-and-paste to/from the window. This is horrible!
- I tried switching back to "text" so I could just try to uncheck the "automatically add paragraphs" checkbox, but that's only a "legacy" thing (as the link above points out). Once you delete a text box & re-add it, that checkbox disappears :(
- it looks like this new "custom html" widget is powered by codemirror
- ok, I found the solution to the annoying codemirror issues, but it can't be fixed site-wide. It must be fixed on a per-user basis by going to the user's profile & checking the "Disable syntax highlighting when editing code" checkbox https://make.wordpress.org/core/tag/codemirror/
- ok, here's the new contents with line breaks:
- the "Custom HTML" widget was added in wordpress v4.8.1 https://wptavern.com/wordpress-4-8-1-adds-a-dedicated-custom-html-widget
<a href="http://opensourceecology.org/CNC-torch-table-workshop/#overview">Overview</a> <br/> <a href="http://opensourceecology.org/CNC-torch-table-workshop/#learning">Learning Outcomes</a> <br/> <a href="http://opensourceecology.org/CNC-torch-table-workshop/#schedule">Schedule</a> </> <a href="http://opensourceecology.org/CNC-torch-table-workshop#registration">Registration</a><div style="width:195px; text-align:center;" > <br/> <div style="width:195px; text-align:center;" ><iframe src="https://www.eventbrite.com/countdown-widget?eid=38201763503" frameborder="0" height="322" width="195" marginheight="0" marginwidth="0" scrolling="no" allowtransparency="true"></iframe><div style="font-family:Helvetica, Arial; font-size:12px; padding:10px 0 5px; margin:2px; width:195px; text-align:center;" ><a class="powered-by-eb" style="color: #ADB0B6; text-decoration: none;" target="_blank" href="http://www.eventbrite.com/">Powered by Eventbrite</a></div></div>
- I emailed marcin; he said that the issue embedding the 2x videos on this page shouldn't be a blocker & confirmed that I should proceed with the migration with the content missing
- began debugging the missing videos on /about-videos-3/
- there's literally nothing in the textarea for this page in wordpress. It's empty for both staging & production *shrug*
- I emailed marcin; he said this shouldn't be a blocker & confirmed that I should proceed with the migration with the content missing
- So the osemain issues that remain to be fixed are:
- /contributors/
- Catarina will be looking into the "featured collaborators" slider issue on this page which is broken on production & staging
- /cnc-torch-table-workshop/
Tue Feb 06, 2018
- the osemain staging site was created on 2018-01-03
[maltfield@hetzner2 backups_for_migration_from_hetzner1]$ date Tue Feb 6 15:53:00 UTC 2018 [maltfield@hetzner2 backups_for_migration_from_hetzner1]$ pwd /var/tmp/backups_for_migration_from_hetzner1 [maltfield@hetzner2 backups_for_migration_from_hetzner1]$ ls -lah | grep -i osemain drwxr-xr-x 4 root root 4.0K Jan 3 21:32 osemain_20180103 [maltfield@hetzner2 backups_for_migration_from_hetzner1]$
- I refreshed it for today = 2018-02-06
# DECLARE VARIABLES
source /root/backups/backup.settings
#stamp=`date +%Y%m%d`
stamp="20180206"
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/osemain_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/osemain_${stamp}"
backupFileName_db_hetzner1="mysqldump_osemain.${stamp}.sql.bz2"
backupFileName_files_hetzner1="osemain_files.${stamp}.tar.gz"
dbName_hetzner1='ose_osemain'
dbName_hetzner2='osemain_db'
dbUser_hetzner2="osemain_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/www.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"
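- a hedged sketch of how these variables drive the refresh (the exact transfer & import commands for this run weren't captured in this log entry; the ssh user@host is an assumption):
# pull the fresh dump over from hetzner1, then load it into the staging db
scp "osemain@dedi978.your-server.de:${backupDir_hetzner1}/${backupFileName_db_hetzner1}" "${backupDir_hetzner2}/"
bzcat "${backupDir_hetzner2}/${backupFileName_db_hetzner1}" | mysql -u"${dbUser_hetzner2}" -p"${dbPass_hetzner2}" "${dbName_hetzner2}"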
- I updated the osemain site with a new wp core, but stopped short of installing our new security wp plugins or attempting to update the existing wp plugins. Here's a breakdown of what I found with the site:
- /
- video fixed with the update
- /contributors/
- revolution slider is broken on the production site with error message = "Revolution Slider Error: Slider with alias collabslider not found."
- collaborators timeline is missing entirely on the staging site
- many photos 404ing for the collaborators timeline on the production site
- /community/
- /community-true-fans/
- 2x sliders missing on staging; working fine on production
- /history-timeline/
- 2x sliders missing on staging; working fine on production
- /cnc-torch-table-workshop/
- Eventbrite is truncated on the left
- The first youtube video shows up as a link, instead of as an embed = https://www.youtube.com/watch?v=JOH03vzDYQg
- The first Vimeo video shows up as a link, instead of as an embed = https://vimeo.com/23785186
- /about-videos-3/
- Marcin said the videos are missing. I asked him to attempt to add them & if he encountered any issues, send me the errors so I can try to reproduce & fix it.
Mon Feb 05, 2018
- Didn't hear back from Marcin wrt my email about the osemain migration, so I decided to delay the migration CHG-2018-02-05
Sun Feb 04, 2018
- Marcin confirmed that we can migrate the osemain wp site on Monday, Feb 5th.
- I began documenting this CHG process at CHG-2018-02-05
- I found that 3 of the active plugins had pending updates: fundraising, wpmudev-updates, and wp-smush-pro
[root@hetzner2 ~]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin list PHP Warning: ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44 PHP Warning: file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/usr/share/php/Composer) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454 +---------------------------------------------+----------+-----------+---------+ | name | status | update | version | +---------------------------------------------+----------+-----------+---------+ | akismet | active | none | 3.3 | | brankic-photostream-widget | active | none | 1.3.1 | | cyclone-slider-2 | inactive | none | 2.12.4 | | duplicate-post | active | none | 3.1.2 | | flattr | inactive | none | 1.2.2 | | force-strong-passwords | active | none | 1.8.0 | | fundraising | active | available | 2.6.4.5 | | google-authenticator | active | none | 0.48 | | google-authenticator-encourage-user-activat | active | none | 0.2 | | ion | | | | | hello | inactive | none | 1.6 | | insert-headers-and-footers | inactive | none | 1.4.1 | | jetpack | active | none | 4.7.1 | | ml-slider | active | none | 3.5 | | ml-slider-pro | active | none | 2.6.6 | | open-in-new-window-plugin | inactive | none | 2.4 | | patch-for-revolution-slider | active | none | 2.4.1 | | post-types-order | inactive | none | 1.9.3 | | really-simple-facebook-twitter-share-button | inactive | none | 4.5 | | s | | | | | recent-tweets-widget | active | none | 1.6.6 | | rename-wp-login | active | none | 2.5.5 | | revision-control | inactive | none | 2.3.2 | | revslider | active | none | 4.3.8 | | shareaholic | inactive | none | 7.8.0.4 | | share-on-diaspora | inactive | none | 0.7.1 | | ssl-insecure-content-fixer | active | none | 2.5.0 | | vcaching | active | none | 1.6.7 | | w3-total-cache | inactive | none | 0.9.5.2 | | wp-memory-usage | inactive | none | 1.2.2 | | wp-optimize | inactive | none | 2.1.1 | | wp-facebook-open-graph-protocol | inactive | none | 2.0.13 | | wpmudev-updates | active | available | 4.2 | | wp-smushit | inactive | none | 2.6.1 | | wp-smush-pro | active | available | 2.4.5 | +---------------------------------------------+----------+-----------+---------+ [WPMUDEV API Error] 4.2 | User has blocked requests through HTTP. ((unknown URL) [500]) [root@hetzner2 ~]#
- that update attempt failed
[root@hetzner2 ~]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin update --all PHP Warning: ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44 PHP Warning: file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/usr/share/php/Composer) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454 Enabling Maintenance mode... Warning: Update package not available. Warning: Update package not available. Warning: Update package not available. +-----------------+-------------+-------------+--------+ | name | old_version | new_version | status | +-----------------+-------------+-------------+--------+ | fundraising | 2.6.4.5 | 2.6.4.9 | Error | | wpmudev-updates | 4.2 | 4.4 | Error | | wp-smush-pro | 2.4.5 | 2.7.6 | Error | +-----------------+-------------+-------------+--------+ Success: Plugins already updated. [root@hetzner2 ~]#
- after commenting-out the line setting WP_HTTP_BLOCK_EXTERNAL to 'true' in wp-config.php, however, I see there were many more updates available
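- for reference, a hedged sketch of the wp-config.php line in question (the exact formatting in our file is an assumption):
# the commented-out line probably now looks something like:
#   #define('WP_HTTP_BLOCK_EXTERNAL', true);
grep -n 'WP_HTTP_BLOCK_EXTERNAL' "${docrootDir_hetzner2}/wp-config.php"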
[root@hetzner2 ~]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin list PHP Warning: ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44 PHP Warning: file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/usr/share/php/Composer) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454 +---------------------------------------------+----------+-----------+---------+ | name | status | update | version | +---------------------------------------------+----------+-----------+---------+ | akismet | active | available | 3.3 | | brankic-photostream-widget | active | none | 1.3.1 | | cyclone-slider-2 | inactive | available | 2.12.4 | | duplicate-post | active | available | 3.1.2 | | flattr | inactive | none | 1.2.2 | | force-strong-passwords | active | none | 1.8.0 | | fundraising | active | available | 2.6.4.5 | | google-authenticator | active | none | 0.48 | | google-authenticator-encourage-user-activat | active | none | 0.2 | | ion | | | | | hello | inactive | none | 1.6 | | insert-headers-and-footers | inactive | available | 1.4.1 | | jetpack | active | available | 4.7.1 | | ml-slider | active | available | 3.5 | | ml-slider-pro | active | none | 2.6.6 | | open-in-new-window-plugin | inactive | none | 2.4 | | patch-for-revolution-slider | active | none | 2.4.1 | | post-types-order | inactive | available | 1.9.3 | | really-simple-facebook-twitter-share-button | inactive | none | 4.5 | | s | | | | | recent-tweets-widget | active | available | 1.6.6 | | rename-wp-login | active | none | 2.5.5 | | revision-control | inactive | none | 2.3.2 | | revslider | active | none | 4.3.8 | | shareaholic | inactive | available | 7.8.0.4 | | share-on-diaspora | inactive | available | 0.7.1 | | ssl-insecure-content-fixer | active | none | 2.5.0 | | vcaching | active | none | 1.6.7 | | w3-total-cache | inactive | available | 0.9.5.2 | | wp-memory-usage | inactive | none | 1.2.2 | | wp-optimize | inactive | none | 2.1.1 | | wp-facebook-open-graph-protocol | inactive | none | 2.0.13 | | wpmudev-updates | active | available | 4.2 | | wp-smushit | inactive | available | 2.6.1 | | wp-smush-pro | active | available | 2.4.5 | +---------------------------------------------+----------+-----------+---------+ [root@hetzner2 ~]#
- then attempting updates again was partially successful
[root@hetzner2 ~]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin update --all PHP Warning: ini_set() has been disabled for security reasons in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils-wp.php on line 44 PHP Warning: file_exists(): open_basedir restriction in effect. File(man-params.mustache) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pea r:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www. opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/va r/www/html/wiki.opensourceecology.org/:/usr/share/php/Composer) in phar:///home/wp/.wp-cli/wp-cli.phar/php/utils.php on line 454 Enabling Maintenance mode... Downloading update from https://downloads.wordpress.org/plugin/akismet.4.0.2.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/akismet-4.0.2.zip'... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from https://downloads.wordpress.org/plugin/cyclone-slider-2.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/cyclone-slider-2-3.2.0.zip'... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from https://downloads.wordpress.org/plugin/duplicate-post.3.2.1.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Warning: Update package not available. Downloading update from https://downloads.wordpress.org/plugin/insert-headers-and-footers.1.4.2.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from https://downloads.wordpress.org/plugin/jetpack.5.7.1.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from https://downloads.wordpress.org/plugin/ml-slider.3.6.8.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from http://wp-updates.com/api/1/download/plugin/sXpq1rDdVNBB4r8Tm-EPBg/ml-slider-pro... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from https://downloads.wordpress.org/plugin/post-types-order.1.9.3.6.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from https://downloads.wordpress.org/plugin/recent-tweets-widget.1.6.8.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from https://downloads.wordpress.org/plugin/shareaholic.8.6.1.zip... Using cached file '/home/wp/.wp-cli/cache/plugin/shareaholic-8.6.1.zip'... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Downloading update from https://downloads.wordpress.org/plugin/share-on-diaspora.0.7.2.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. 
Downloading update from https://downloads.wordpress.org/plugin/w3-total-cache.0.9.6.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Warning: Update package not available. Downloading update from https://downloads.wordpress.org/plugin/wp-smushit.2.7.6.zip... Unpacking the update... Installing the latest version... Removing the old version of the plugin... Plugin updated successfully. Warning: Update package not available. +----------------------------+-------------+-------------+---------+ | name | old_version | new_version | status | +----------------------------+-------------+-------------+---------+ | akismet | 3.3 | 4.0.2 | Updated | | cyclone-slider-2 | 2.12.4 | 3.2.0 | Updated | | duplicate-post | 3.1.2 | 3.2.1 | Updated | | fundraising | 2.6.4.5 | 2.6.4.9 | Updated | | insert-headers-and-footers | 1.4.1 | 1.4.2 | Updated | | jetpack | 4.7.1 | 5.7.1 | Updated | | ml-slider | 3.5 | 3.6.8 | Updated | | ml-slider-pro | 2.6.6 | 2.7.1 | Updated | | post-types-order | 1.9.3 | 1.9.3.6 | Updated | | recent-tweets-widget | 1.6.6 | 1.6.8 | Updated | | shareaholic | 7.8.0.4 | 8.6.1 | Updated | | share-on-diaspora | 0.7.1 | 0.7.2 | Updated | | w3-total-cache | 0.9.5.2 | 0.9.6 | Updated | | wpmudev-updates | 4.2 | 4.4 | Updated | | wp-smushit | 2.6.1 | 2.7.6 | Updated | | wp-smush-pro | 2.4.5 | 2.7.7 | Updated | +----------------------------+-------------+-------------+---------+ Success: Updated 16 of 16 plugins. [root@hetzner2 ~]#
- the plugin list that follows showed everything was updated except the same 3 plugins that keep failing, so it's probably those specific plugins' fault. Let's move on..
[root@hetzner2 ~]# sudo -u wp -i wp --path=${docrootDir_hetzner2} plugin list ... +---------------------------------------------+----------+-----------+---------+ | name | status | update | version | +---------------------------------------------+----------+-----------+---------+ | akismet | active | none | 4.0.2 | | brankic-photostream-widget | active | none | 1.3.1 | | cyclone-slider-2 | inactive | none | 3.2.0 | | duplicate-post | active | none | 3.2.1 | | flattr | inactive | none | 1.2.2 | | force-strong-passwords | active | none | 1.8.0 | | fundraising | active | available | 2.6.4.5 | | google-authenticator | active | none | 0.48 | | google-authenticator-encourage-user-activat | active | none | 0.2 | | ion | | | | | hello | inactive | none | 1.6 | | insert-headers-and-footers | inactive | none | 1.4.2 | | jetpack | active | none | 5.7.1 | | ml-slider | active | none | 3.6.8 | | ml-slider-pro | active | none | 2.7.1 | | open-in-new-window-plugin | inactive | none | 2.4 | | patch-for-revolution-slider | active | none | 2.4.1 | | post-types-order | inactive | none | 1.9.3.6 | | really-simple-facebook-twitter-share-button | inactive | none | 4.5 | | s | | | | | recent-tweets-widget | active | none | 1.6.8 | | rename-wp-login | active | none | 2.5.5 | | revision-control | inactive | none | 2.3.2 | | revslider | active | none | 4.3.8 | | shareaholic | inactive | none | 8.6.1 | | share-on-diaspora | inactive | none | 0.7.2 | | ssl-insecure-content-fixer | active | none | 2.5.0 | | vcaching | active | none | 1.6.7 | | w3-total-cache | inactive | none | 0.9.6 | | wp-memory-usage | inactive | none | 1.2.2 | | wp-optimize | inactive | none | 2.1.1 | | wp-facebook-open-graph-protocol | inactive | none | 2.0.13 | | wpmudev-updates | active | available | 4.2 | | wp-smushit | inactive | none | 2.7.6 | | wp-smush-pro | active | available | 2.4.5 | +---------------------------------------------+----------+-----------+---------+ [root@hetzner2 ~]#
- Found a bunch of content issues on the site when doing a quick spot-check. Emailed Marcin for revalidation & a go/no-go decision for the cutover tomorrow
- error "Revolution Slider Error: Slider with alias collabslider not found."
- http://opensourceecology.org/contributors/
- https://osemain.opensourceecology.org/contributors/
- flattr embed is broken on donate section
- http://opensourceecology.org/community/
- https://osemain.opensourceecology.org/community/
- slider for True Fans isn't appearing
- http://opensourceecology.org/community-true-fans/
- https://osemain.opensourceecology.org/community-true-fans/
- added a CHG for the forum cutover CHG-2018-02-04
- followed the above guide. We're now storing the dynamic (php) vanilla files outside the docroot, next to the static content. Of course, the static content takes up more space, but it's not too bad.
[root@hetzner2 forum.opensourceecology.org]# date Sun Feb 4 18:13:00 UTC 2018 [root@hetzner2 forum.opensourceecology.org]# pwd /var/www/html/forum.opensourceecology.org [root@hetzner2 forum.opensourceecology.org]# du -sh * 2.7G htdocs 173M vanilla_docroot_backup.20180113 [root@hetzner2 forum.opensourceecology.org]#
- updated dns of forum.opensourceecology.org to point to hetzner2 instead of hetzner1
- deleted the CNAME entry for 'forum' pointing to dedi978.your-server.de
- added an A entry for 'forum' pointing to 138.201.84.243
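- a quick hedged check that the new A record is being served (resolvers will lag until the old CNAME's TTL expires):
dig +short A forum.opensourceecology.org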
- updated the name 'stagingforum' to 'forum' in /etc/httpd/conf.d/00-forum.opensourceecology.org.conf
- updated the name 'stagingforum' to 'forum' in /etc/nginx/conf.d/forum.opensourceecology.org.conf
- updated the opensourceecology.org certificate
[root@hetzner2 forum.opensourceecology.org]# certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org -w /var/www/html/www.opensourceecology.org/htdocs -d osemain.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d forum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org ... IMPORTANT NOTES: - Congratulations! Your certificate and chain have been saved at: /etc/letsencrypt/live/opensourceecology.org/fullchain.pem Your key file has been saved at: /etc/letsencrypt/live/opensourceecology.org/privkey.pem Your cert will expire on 2018-05-05. To obtain a new or tweaked version of this certificate in the future, simply run certbot again. To non-interactively renew *all* of your certificates, run "certbot renew" - If you like Certbot, please consider supporting our work by: Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate Donating to EFF: https://eff.org/donate-le [root@hetzner2 forum.opensourceecology.org]# /bin/chmod 0400 /etc/letsencrypt/archive/*/pri* [root@hetzner2 forum.opensourceecology.org]# nginx -t && service nginx reload nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Redirecting to /bin/systemctl reload nginx.service [root@hetzner2 forum.opensourceecology.org]#
- Did an unsafe varnish cache clear
service varnish restart
- confirmed that the static-content site was loading on the new url https://forum.opensourceecology.org
- updated our wiki (where the forum links to for info on how to create an account) with a notice that the site was deprecated in 2018 Vanilla_Forums
- began to test sending our encrypted ossec alerts to multiple recipients
- discovered that our spf record is invalid. I looked up the proper way to do this on cloudflare https://support.cloudflare.com/hc/en-us/articles/200168626-How-do-I-add-a-SPF-record-
- I had "spf" in the "Name" column, but it should have been "@". I updated this, and then Cloudflare automatically replaced the "@" that I typed with "opensourceecology.org"
- after the change above, I got a change in the lookup tool; success! https://mxtoolbox.com/SuperTool.aspx?action=spf%3aopensourceecology.org&run=toolpage
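- the same check can be run locally with dig (hedged; output depends on resolver caching):
dig +short TXT opensourceecology.org | grep -i spf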
- updated the sent_encrypted_alarm.sh script to support multiple recipients
[root@hetzner2 ossec]# date
Mon Feb 5 01:39:24 UTC 2018
[root@hetzner2 ossec]# pwd
/var/ossec
[root@hetzner2 ossec]# cat sent_encrypted_alarm.sh
#!/bin/bash

# store the would-be plaintext email body
plaintext=`/usr/bin/formail -I ""`

# loop through all recipients & send them individually-encrypted mail
recipients="michael@opensourceecology.org marcin@opensourceecology.org"
for recipient in $(echo $recipients); do

        # leave nothing unencrypted, including the subject!
        echo "${plaintext}" | /usr/bin/gpg --homedir /var/ossec/.gnupg --trust-model always -ear "${recipient}" | /usr/bin/mail -r noreply@opensourceecology.org -s "" "${recipient}"

done

exit 0
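- a hedged smoke test for the script above: feed it a fake alert on stdin & confirm each recipient gets an individually-encrypted mail:
printf 'Subject: ossec test\n\nthis body should arrive gpg-encrypted\n' | /var/ossec/sent_encrypted_alarm.sh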
- found & documented command to clear varnish cache without actually restarting the process Web_server_configuration
varnishadm 'ban req.url ~ "."'
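- the same ban mechanism can target a single vhost instead of everything (a hedged example using varnish's ban-expression syntax):
varnishadm 'ban req.http.host == "www.opensourceecology.org" && req.url ~ "."'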
Thr Feb 01, 2018
- confirmed that the forum wget retry worked
- rsync'd the wget output into the document root, and a spot-check showed that most pages worked
- saw an issue (Forbidden) with all the tagged pages, ie https://stagingforum.opensourceecology.org/discussions/tagged/water.html
[Thu Feb 01 17:29:02.262787 2018] [core:crit] [pid 30854] (9)Bad file descriptor: [client 127.0.0.1:45488] AH00529: /var/www/html/forum.opensourceecology.org/htdocs/discussions/tagged/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable and that '/var/www/html/forum.opensourceecology.org/htdocs/discussions/tagged/' is executable
- the issue appears to be with the directory named '.htaccess' for the tag of the same name
[root@hetzner2 htdocs]# ls -lah discussions/tagged/.htaccess total 64K drwxr-xr-x 2 root root 4.0K Feb 1 11:36 . drwxr-xr-x 573 root root 36K Feb 1 17:27 .. -rw-r--r-- 1 root root 1.7K Feb 1 10:33 feed.rss -rw-r--r-- 1 root root 17K Feb 1 11:36 p1.html [root@hetzner2 htdocs]#
- removing the above dir worked. The consequence is we will get a 404 for any links to this tag, but that's an acceptable loss IMO; the discussions themselves will still be accessible
- removed the forums dir from the allowed paths (open_basedir) in /etc/php.ini since we're now serving just static content
- restarted httpd
- confirmed that awstats has been updating, but--since today happens to be the first of the month--there's no data yet. I'll check again in a week or so.
Wed Jan 31, 2018
- moved the wget'd forums into the docroot & found a lot of Forbidden responses to my requests
- for some reason, the pages are named 'p1' instead of 'index.html'. The "Forbidden" is caused by the server's refusal to do a directory listing.
- solution: add a symlink to "p1" from "index.html" in every dir with a "p1" but no "index.html"
[root@hetzner2 htdocs]# for p1 in $(find . -name p1); do dir=`dirname $p1`; pushd $dir; ln -s "p1" "index.html"; popd; done
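- note that the loop above never actually tests for an existing index.html; ln just fails harmlessly where one already exists. A hedged variant that makes the check explicit (and skips the pushd/popd):
for p1 in $(find . -name p1); do
  dir=$(dirname "$p1")
  [ -e "${dir}/index.html" ] || ln -s "p1" "${dir}/index.html"
done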
- that helped, but there's still actually missing content (ie: https://stagingforum.opensourceecology.org/categories/other-languages/)
[root@hetzner2 htdocs]# ls -lah categories/other-languages/ total 12K drwxrwxrwx 2 root root 4.0K Jan 25 16:23 . drwxrwxrwx 55 root root 4.0K Jan 25 20:52 .. -rwxrwxrwx 1 root root 3.5K Jan 25 16:23 feed.rss [root@hetzner2 htdocs]#
- attempting again with a different command from https://www.linuxjournal.com/content/downloading-entire-web-site-wget
[root@hetzner2 oseforum.20180201]# date Thu Feb 1 03:39:34 UTC 2018 [root@hetzner2 oseforum.20180201]# pwd /var/tmp/deleteMeIn2019/oseforum.20180201 [root@hetzner2 oseforum.20180201]# time nice wget --recursive --no-clobber --page-requisites --html-extension --convert-links --domains "forum.opensourceecology.org" "http://forum.opensourceecology.org"
- if the above doesn't work, I'll try some of the options in the comments (ie: -m, --wait, --limit-rate, etc)
Tue Jan 30, 2018
- Trained Marcin on PGP
Fri Jan 26, 2018
- Emailed Marcin about setting up PGP
- Confirmed that the wget of the forums finished
[root@hetzner2 wget]# pwd /var/tmp/deleteMeIn2019/oseoforum.20180125/wget [root@hetzner2 ~]# wget -r -p -e robots=off http://forum.opensourceecology.org ... FINISHED --2018-01-26 00:53:29-- Total wall clock time: 8h 39m 3s Downloaded: 96387 files, 2.2G in 1m 33s (24.5 MB/s) [root@hetzner2 wget]# du -sh forum.opensourceecology.org/* 48K forum.opensourceecology.org/activity 264K forum.opensourceecology.org/applications 300K forum.opensourceecology.org/cache 3.0M forum.opensourceecology.org/categories 27M forum.opensourceecology.org/dashboard 1.4G forum.opensourceecology.org/discussion 15M forum.opensourceecology.org/discussions 873M forum.opensourceecology.org/entry 148K forum.opensourceecology.org/index.html 744K forum.opensourceecology.org/plugins 72M forum.opensourceecology.org/profile 12K forum.opensourceecology.org/search 28K forum.opensourceecology.org/search?Search=#000000&Mode=like 28K forum.opensourceecology.org/search?Search=#0&Mode=like 16K forum.opensourceecology.org/search?Search=#13Lifetime&Mode=like 16K forum.opensourceecology.org/search?Search=#197187&Mode=like 32K forum.opensourceecology.org/search?Search=#1&Mode=like 28K forum.opensourceecology.org/search?Search=#2&Mode=like 16K forum.opensourceecology.org/search?Search=#363636&Mode=like 28K forum.opensourceecology.org/search?Search=#3&Mode=like 16K forum.opensourceecology.org/search?Search=#458&Mode=like 20K forum.opensourceecology.org/search?Search=#4&Mode=like 28K forum.opensourceecology.org/search?Search=#5&Mode=like 20K forum.opensourceecology.org/search?Search=#6&Mode=like 16K forum.opensourceecology.org/search?Search=#7&Mode=like 16K forum.opensourceecology.org/search?Search=#8-High&Mode=like 28K forum.opensourceecology.org/search?Search=#8&Mode=like 16K forum.opensourceecology.org/search?Search=#9-Industrial&Mode=like 16K forum.opensourceecology.org/search?Search=#9&Mode=like 16K forum.opensourceecology.org/search?Search=#Another&Mode=like 16K forum.opensourceecology.org/search?Search=#apollo&Mode=like 16K forum.opensourceecology.org/search?Search=#Cloud&Mode=like 16K forum.opensourceecology.org/search?Search=#Edit&Mode=like 16K forum.opensourceecology.org/search?Search=#Fabiofranca&Mode=like 16K forum.opensourceecology.org/search?Search=#freecad&Mode=like 16K forum.opensourceecology.org/search?Search=#HUBCAMP&Mode=like 16K forum.opensourceecology.org/search?Search=#openhardware&Mode=like 16K forum.opensourceecology.org/search?Search=#opensourceecology&Mode=like 16K forum.opensourceecology.org/search?Search=#qi-hardware&Mode=like 16K forum.opensourceecology.org/search?Search=#qihardware&Mode=like 16K forum.opensourceecology.org/search?Search=#REDIRECT&Mode=like 16K forum.opensourceecology.org/search?Search=#toc&Mode=like 16K forum.opensourceecology.org/search?Search=#toctitle&Mode=like 24K forum.opensourceecology.org/themes 25M forum.opensourceecology.org/uploads 764K forum.opensourceecology.org/vanilla [root@hetzner2 wget]#
Thr Jan 25, 2018
- Marcin said that the forums' initial validation is complete
- discovered that there was an 'oseforum' directory inside the forums docroot that had a duplicate copy of all the files in the docroot.
- I moved this directory to
- I began a quick investigation on the low-hanging-fruit to harden Vanilla
- I didn't find any good guides. Lots of discussion about permissions, though. https://open.vanillaforums.com/discussion/748/security
- the fucking installation guide tells us to put the conf directory inside the docroot *and* make it writeable by the webserver. That's a foundational red flag! https://github.com/vanilla/vanilla/blob/master/README.md
- probably the most important step to securing the forums would be updating the core Vanilla version
- The version of Vanilla Forums that we're running is Version 2.0.18.1
- The current version of Vanilla Forums is 2.5
- The number of plugins that we're using tells me that an update of the Vanilla Forums core code is probably going to be extremely non-trivial
[root@hetzner2 htdocs]# date
Thu Jan 25 13:54:16 UTC 2018
[root@hetzner2 htdocs]# pwd
/var/www/html/forum.opensourceecology.org/htdocs
[root@hetzner2 htdocs]# grep 'EnabledPlugins' conf/config.php
// EnabledPlugins
$Configuration['EnabledPlugins']['HtmLawed'] = 'HtmLawed';
$Configuration['EnabledPlugins']['embedvanilla'] = 'embedvanilla';
$Configuration['EnabledPlugins']['Tagging'] = 'Tagging';
$Configuration['EnabledPlugins']['Flagging'] = 'Flagging';
$Configuration['EnabledPlugins']['Liked'] = 'Liked';
$Configuration['EnabledPlugins']['OpenID'] = 'OpenID';
$Configuration['EnabledPlugins']['Twitter'] = 'Twitter';
$Configuration['EnabledPlugins']['Sitemaps'] = 'Sitemaps';
$Configuration['EnabledPlugins']['GoogleSignIn'] = 'GoogleSignIn';
$Configuration['EnabledPlugins']['Facebook'] = 'Facebook';
$Configuration['EnabledPlugins']['FileUpload'] = 'FileUpload';
$Configuration['EnabledPlugins']['VanillaStats'] = 'VanillaStats';
$Configuration['EnabledPlugins']['EMailSubscribe'] = 'EMailSubscribe';
$Configuration['EnabledPlugins']['Emotify'] = 'Emotify';
$Configuration['EnabledPlugins']['VanillaInThisDiscussion'] = 'VanillaInThisDiscussion';
$Configuration['EnabledPlugins']['ProxyConnect'] = 'ProxyConnect';
$Configuration['EnabledPlugins']['ProxyConnectManual'] = 'ProxyConnectManualPlugin';
$Configuration['EnabledPlugins']['cleditor'] = 'cleditor';
$Configuration['EnabledPlugins']['TinyMCE'] = 'TinyMCE';
$Configuration['EnabledPlugins']['Categories2Menu'] = TRUE;
$Configuration['EnabledPlugins']['SplitMerge'] = TRUE;
$Configuration['EnabledPlugins']['Voting'] = TRUE;
$Configuration['EnabledPlugins']['FeedDiscussions'] = TRUE;
$Configuration['EnabledPlugins']['StopForumSpam'] = TRUE;
$Configuration['EnabledPlugins']['Minify'] = TRUE;
$Configuration['EnabledPlugins']['Cleanser'] = TRUE;
$Configuration['EnabledPlugins']['RegistrationRestrictLogger'] = TRUE;
[root@hetzner2 htdocs]#
- I found no obvious way to make the forums read-only from the application-side
- except an old extension that hasn't been updated since 2005 https://open.vanillaforums.com/addon/readonly-plugin
- of course, we could achieve read-only by:
- revoking all access privileges for the db user = ${dbUser_hetzner2} = 'oseforum_user' && granting them SELECT-only access
- And making all of the files/directories read-only
# DECLARE VARIABLES
source /root/backups/backup.settings
stamp="20180113"
backupDir_hetzner1="/usr/home/oseforum/tmp/backups_for_migration_to_hetzner2/oseforum_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/oseforum_${stamp}"
backupFileName_db_hetzner1="mysqldump_oseforum.${stamp}.sql.bz2"
backupFileName_files_hetzner1="oseforum_files.${stamp}.tar.gz"
dbName_hetzner1='oseforum'
dbName_hetzner2='oseforum_db'
dbUser_hetzner2="oseforum_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/forum.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"

time nice mysql -uroot -p${mysqlPass} -sNe "REVOKE ALL ON ${dbName_hetzner2}.* FROM '${dbUser_hetzner2}'@'localhost'; FLUSH PRIVILEGES;"
time nice mysql -uroot -p${mysqlPass} -sNe "GRANT SELECT ON ${dbName_hetzner2}.* TO '${dbUser_hetzner2}'@'localhost'; FLUSH PRIVILEGES;"

# set permissions
chown -R apache:apache "${vhostDir_hetzner2}"
find "${vhostDir_hetzner2}" -type d -exec chmod 0550 {} \;
find "${vhostDir_hetzner2}" -type f -exec chmod 0440 {} \;
chown apache:apache-admins "${docrootDir_hetzner2}/conf/config.php"
chmod 0440 "${docrootDir_hetzner2}/conf/config.php"
- ran the above commands & confirmed that the permissions changed to SELECT-only in the db
[root@hetzner2 htdocs]# nice mysql -uroot -p${mysqlPass} mysql -sNe "select * from db where Db = 'oseforum_db';" localhost oseforum_db oseforum_user Y N N N N N N N N N N N N N N N N N N [root@hetzner2 htdocs]#
- attempted to login, but then I found that the fucking forums leak our server-side error message (in json) back to the user, including the username & hostname of the db server, in the response to our login query! https://stagingforum.opensourceecology.org/entry/signin
{"Code":256,"Exception":"UPDATE command denied to user 'oseforum_user'@'localhost' for table 'GDN_User'|Gdn_Database|Query|update GDN_User User set \n DateLastActive = :DateLastActive,\n DateUpdated = :DateUpdated,\n UpdateIPAddress = :UpdateIPAddress\nwhere UserID = :UserID"}
- And then it created a popup window leaking all this info!
FATAL ERROR IN: Gdn_Database.Query(); "UPDATE command denied to user 'oseforum_user'@'localhost' for table 'GDN_User'" update GDN_User User set DateLastActive = :DateLastActive, DateUpdated = :DateUpdated, UpdateIPAddress = :UpdateIPAddress where UserID = :UserID LOCATION: /var/www/html/forum.opensourceecology.org/htdocs/library/database/class.database.php > 276: > 277: if (!is_object($PDOStatement)) { > 278: trigger_error(ErrorMessage('PDO Statement failed to prepare', $this->ClassName, 'Query', $this->GetPDOErrorMessage($this->Connection()->errorInfo())), E_USER_ERROR); > 279: } else if ($PDOStatement->execute($InputParameters) === FALSE) { >>> 280: trigger_error(ErrorMessage($this->GetPDOErrorMessage($PDOStatement->errorInfo()), $this->ClassName, 'Query', $Sql), E_USER_ERROR); > 281: } > 282: } else { > 283: $PDOStatement = $this->Connection()->query($Sql); > 284: } BACKTRACE: [/var/www/html/forum.opensourceecology.org/htdocs/library/database/class.database.php] PHP::Gdn_ErrorHandler(); [/var/www/html/forum.opensourceecology.org/htdocs/library/database/class.database.php 280] PHP::trigger_error(); [/var/www/html/forum.opensourceecology.org/htdocs/library/database/class.sqldriver.php 1650] Gdn_Database->Query(); [/var/www/html/forum.opensourceecology.org/htdocs/library/database/class.sqldriver.php 1619] Gdn_SQLDriver->Query(); [/var/www/html/forum.opensourceecology.org/htdocs/applications/dashboard/models/class.usermodel.php 865] Gdn_SQLDriver->Put(); [/var/www/html/forum.opensourceecology.org/htdocs/library/core/class.session.php 307] UserModel->Save(); [/var/www/html/forum.opensourceecology.org/htdocs/library/core/class.auth.php 36] Gdn_Session->Start(); [/var/www/html/forum.opensourceecology.org/htdocs/bootstrap.php 168] Gdn_Auth->StartAuthenticator(); [/var/www/html/forum.opensourceecology.org/htdocs/index.php 41] PHP::require_once();
- found that the source of the error output being sent to the user was this line:
$Configuration['Garden']['Errors']['MasterView'] = 'deverror.master.php';
- fixed the log leaking by commenting-out the above line & replacing it with:
$Configuration['Garden']['Errors']['LogEnabled'] = TRUE;
$Configuration['Garden']['Errors']['LogFile'] = '';
- the above setting forces errors to be written to the apache-defined error logfile at /var/log/httpd/forum.opensourceecology.org/error_log
[root@hetzner2 httpd]# date Thu Jan 25 15:41:25 UTC 2018 [root@hetzner2 httpd]# pwd /var/log/httpd [root@hetzner2 httpd]# tail -f forum.opensourceecology.org/access_log forum.opensourceecology.org/error_log ... ==> forum.opensourceecology.org/error_log <== [Thu Jan 25 15:39:48.700738 2018] [:error] [pid 24269] [client 127.0.0.1:56072] [Garden] /var/www/html/forum.opensourceecology.org/htdocs/library/database/class.database.php, 280, Gdn_Database.Query(), UPDATE command denied to user 'oseforum_user'@'localhost' for table 'GDN_User', update GDN_User User set \n DateLastActive = :DateLastActive,\n DateUpdated = :DateUpdated,\n UpdateIPAddress = :UpdateIPAddress\nwhere UserID = :UserID
- ugh, but the response is actually still leaking info
<h1>FATAL ERROR IN: Gdn_Database.Query();</h1> <div class="AjaxError">"UPDATE command denied to user 'oseforum_user'@'localhost' for table 'GDN_User'" update GDN_User User set DateLastActive = :DateLastActive, DateUpdated = :DateUpdated, UpdateIPAddress = :UpdateIPAddress where UserID = :UserID LOCATION: /var/www/html/forum.opensourceecology.org/htdocs/library/database/class.database.php > 276: > 277: if (!is_object($PDOStatement)) { > 278: trigger_error(ErrorMessage('PDO Statement failed to prepare', $this->ClassName, 'Query', $this->GetPDOErrorMessage($this->Connection()->errorInfo())), E_USER_ERROR); > 279: } else if ($PDOStatement->execute($InputParameters) === FALSE) { >>> 280: trigger_error(ErrorMessage($this->GetPDOErrorMessage($PDOStatement->errorInfo()), $this->ClassName, 'Query', $Sql), E_USER_ERROR); > 281: } > 282: } else { > 283: $PDOStatement = $this->Connection()->query($Sql); > 284: } BACKTRACE: [/var/www/html/forum.opensourceecology.org/htdocs/library/database/class.database.php] PHP::Gdn_ErrorHandler(); [/var/www/html/forum.opensourceecology.org/htdocs/library/database/class.database.php 280] PHP::trigger_error(); [/var/www/html/forum.opensourceecology.org/htdocs/library/database/class.sqldriver.php 1650] Gdn_Database->Query(); [/var/www/html/forum.opensourceecology.org/htdocs/library/database/class.sqldriver.php 1619] Gdn_SQLDriver->Query(); [/var/www/html/forum.opensourceecology.org/htdocs/applications/dashboard/models/class.usermodel.php 865] Gdn_SQLDriver->Put(); [/var/www/html/forum.opensourceecology.org/htdocs/library/core/class.session.php 307] UserModel->Save(); [/var/www/html/forum.opensourceecology.org/htdocs/library/core/class.auth.php 36] Gdn_Session->Start(); [/var/www/html/forum.opensourceecology.org/htdocs/bootstrap.php 168] Gdn_Auth->StartAuthenticator(); [/var/www/html/forum.opensourceecology.org/htdocs/index.php 41] PHP::require_once(); </div>
- unfortunately, after a single login attempt, all future queries of any kind now produce the same generic "Bonk" error message. It appears that the fix is to clear the cookie named "Vanilla" for the domain on the client-side. By default, this cookie sticks around for 1 month.
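- a rough client-side way to reproduce the stale-cookie state from the shell (hypothetical sketch; the cookie value is whatever your browser stored, and the "Bonk" string is just grepped out of the returned page):
curl -s 'https://stagingforum.opensourceecology.org/' | grep -c 'Bonk'
curl -s -b 'Vanilla=STALE_COOKIE_VALUE' 'https://stagingforum.opensourceecology.org/' | grep -c 'Bonk'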
- I changed the LogEnabled flag to FALSE. That helped, but then the logs disappeared from the apache logfile & the response still contained the error (though a shorter one)!
{"Code":256,"Exception":"UPDATE command denied to user 'oseforum_user'@'localhost' for table 'GDN_User'|Gdn_Database|Query|update GDN_User User set \n DateLastActive = :DateLastActive,\n DateUpdated = :DateUpdated,\n UpdateIPAddress = :UpdateIPAddress\nwhere UserID = :UserID"}
- I'm pretty fucking over Vanilla. They seem to not give a shit about security.
- Note: the leaks above contain nothing more than what we've already openly disclosed in our specs & docs. However, error messages often contain very sensitive info; sometimes they contain passwords. Hence, the fact that this is being leaked at all is a huge red flag. The fact that it's considered "normal" behaviour is terrifying. The fact that I googled around for a fix of this issue, and the only thing I found was a developer suggesting to use a browser-side debugger to find the server-side error messages is mortifying. Sending server-side errors to the client should be an opt-in feature, not the default. But Vanilla seems to have it as the baked-in default, with no obvious way (if at all?) to disable it. This tells me that they're either incompetent or just don't care. That's not what we want from the developers of our tools.
- phpbb looks a bit better; they have at least a few blog posts tagged "security" https://blog.phpbb.com/tag/security/
- phpbb has a wiki with a guide recommending moving config.php out of the docroot. Not one of the guides I found for Vanilla mentioned that obvious requirement. They were occupied with permissions, and arguing about whether taking away the execute bit matters for php files (read: it doesn't). https://wiki.phpbb.com/Hardening_Tips
- began downloading a static-content snapshot of our forums with wget
[root@hetzner2 wget]# date Thu Jan 25 16:49:39 UTC 2018 [root@hetzner2 wget]# pwd /var/tmp/deleteMeIn2019/oseoforum.20180125/wget [root@hetzner2 wget]# wget -r -p -e robots=off http://forum.opensourceecology.org
- I'm concerned that the output above may be insanely long, since people can permalink to comments within discussions. So effectively we'll store a single thread N times where N is the number of comments in the thread :(
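- a possible mitigation (untested sketch): if the wget on this box supports --reject-regex (>= 1.14), the per-comment permalink URLs could be skipped so each thread is only mirrored once. The regex assumes Vanilla's /discussion/comment/<id> permalink scheme:
# skip per-comment permalink pages to avoid storing each thread N times
wget -r -p -e robots=off \
  --reject-regex '/discussion/comment/[0-9]+' \
  http://forum.opensourceecology.org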
- sent an email to Marcin asking if he was ready to fully archive our forums to static content
- I discovered that the hardened permissions I defined for Wordpress should be improved such that:
- Files & directories that don't need write permissions should not have them. That's every file in a wordpress docroot except the folder "wp-content/uploads" and its subfiles/dirs.
- World permissions (not-user && not-group) should be set to 0 for all files & directories inside the docroot (including the docroot dir itself!).
- Excluding 'wp-content/uploads/', these files should also not be owned by the user that runs the webserver (in cent, that's the 'apache' user). Even if a file is set to '0400', if it's owned by the 'apache' user then the 'apache' user can simply change the permissions & write to it anyway.
- Excluding 'wp-content/uploads/', all directories in the docroot (including the docroot dir itself!) should be owned by a group that contains the user that runs our webserver (in cent, that's the 'apache' user). The permissions for this group must not include write access for files or directories. Even if a file is set to '0040', if the containing directory is group-writeable (ie: '0070'), any user in the group that owns the directory can delete the existing file and replace it with a new one, effectively bypassing the read-only permission set on the file.
- currently, our documentation reads:
vhostDir="/var/www/html/www.opensourceecology.org"
wpDocroot="${vhostDir}/htdocs"
chown -R apache:apache "${vhostDir}"
find "${vhostDir}" -type d -exec chmod 0750 {} \;
find "${vhostDir}" -type f -exec chmod 0640 {} \;
find "${wpDocroot}/wp-content" -type f -exec chmod 0660 {} \;
find "${wpDocroot}/wp-content" -type d -exec chmod 0770 {} \;
chown apache:apache-admins "${vhostDir}/wp-config.php"
chmod 0440 "${vhostDir}/wp-config.php"
- but I believe the (untested) update should be:
vhostDir="/var/www/html/www.opensourceecology.org"
wpDocroot="${vhostDir}/htdocs"
chown -R not-apache:apache "${vhostDir}"
find "${vhostDir}" -type d -exec chmod 0050 {} \;
find "${vhostDir}" -type f -exec chmod 0040 {} \;
find "${wpDocroot}/wp-content" -type f -exec chmod 0060 {} \;
find "${wpDocroot}/wp-content" -type d -exec chmod 0070 {} \;
chown not-apache:apache-admins "${vhostDir}/wp-config.php"
chmod 0040 "${vhostDir}/wp-config.php"
- ...such that:
- the 'not-apache' user is a new user that doesn't run any software (ie: a daemon such as a web server) and whose shell is "/sbin/nologin" and home is "/dev/null" (see the sketch after this list)
- the apache user is now in the apache-admins group
- the result will now be that:
- a compromised web server can no longer write data to the docroot outside the 'wp-content/uploads/' directory (ie: adding malicious code to a php script)
- for anyone to make changes to any files in the docroot (other than 'wp-content/uploads/'), they must be the root user. I think this is fair: if they don't have the skills necessary to become root, they probably shouldn't fuck with the wp core files anyway.
- however, as before, any user in the 'apache' group can read most files in the docroot. So, if we want an OSE Developer with ssh access to be able to access our server's files read-only, we should add them to the 'apache' group. If we trust them with passwords as well, we should additionally add them to the 'apache-admins' group.
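- a minimal sketch of that user setup (names taken from the notes above; untested):
# system user that owns the docroot but can never log in or run a daemon
useradd --system --shell /sbin/nologin --home-dir /dev/null not-apache
# let the webserver user read the password-bearing configs via the apache-admins group
usermod -a -G apache-admins apache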
Tue Jan 23, 2018
- found the best way to browse the git sourcecode of the mediawiki core at this URL https://phabricator.wikimedia.org/source/mediawiki/tags/master/
- for example, for v1.30.0 https://phabricator.wikimedia.org/source/mediawiki/browse/REL1_30/
- or for the 'vendor' directory https://phabricator.wikimedia.org/diffusion/MWVD/browse/master/
- using composer to populate the 'vendor' directory does not seem like a practical option with our hardened php config.
- I attempted to download the actual tarball, copy-in the 'vendor' directory, update permissions, and then run the update
# DECLARE VARIABLES
source /root/backups/backup.settings
stamp="20180120"
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/wiki_${stamp}"
backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2"
backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz"
dbName_hetzner1='osewiki'
dbName_hetzner2='osewiki_db'
dbUser_hetzner2="osewiki_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/wiki.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"

mkdir /var/tmp/mediawiki-1.30.0
pushd /var/tmp/mediawiki-1.30.0
wget https://releases.wikimedia.org/mediawiki/1.30/mediawiki-1.30.0.tar.gz
tar -xzvf mediawiki-1.30.0.tar.gz
cd mediawiki-1.30.0
rm -rf /var/www/html/wiki.opensourceecology.org/htdocs/vendor
cp -r vendor /var/www/html/wiki.opensourceecology.org/htdocs/

# set permissions
chown -R apache:apache "${vhostDir_hetzner2}"
find "${vhostDir_hetzner2}" -type d -exec chmod 0750 {} \;
find "${vhostDir_hetzner2}" -type f -exec chmod 0640 {} \;
find "${docrootDir_hetzner2}/images" -type f -exec chmod 0660 {} \;
find "${docrootDir_hetzner2}/images" -type d -exec chmod 0770 {} \;
chown apache:apache-admins "${vhostDir_hetzner2}/LocalSettings.php"
chmod 0440 "${vhostDir_hetzner2}/LocalSettings.php"
chown apache:apache-admins "${docrootDir_hetzner2}/LocalSettings.php"
chmod 0440 "${docrootDir_hetzner2}/LocalSettings.php"

cd /var/www/html/wiki.opensourceecology.org/htdocs/maintenance
php update.php
popd
- now I got a fatal php error
[Tue Jan 23 22:02:07.182288 2018] [:error] [pid 31837] [client 127.0.0.1:37506] PHP Fatal error: Class 'Wikimedia\\WrappedString' not found in /var/www/html/wiki.opensourceecology.org/htdocs/extensions/Gadgets/GadgetHooks.php on line 217
- I changed the require_once() to wfLoadExtension() in LocalSettings.php
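- a hypothetical one-liner for that edit (the exact original require_once line is assumed; repeat per converted extension):
old='require_once "$IP/extensions/Gadgets/Gadgets.php";'
new="wfLoadExtension( 'Gadgets' );"
sed -i "s|$old|$new|" /var/www/html/wiki.opensourceecology.org/LocalSettings.php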
- I found that the directory path for the "WrappedString.php" file is distinct between the vendor dirs of [a] the git repo vs [b] the extracted tarball release.
# the tarball has an src dir with the 2 files (WrappedString.php & WrappedStringList.php)
[root@hetzner2 htdocs]# ls -lah /var/tmp/mediawiki-1.30.0/mediawiki-1.30.0/vendor/wikimedia/wrappedstring/src/
total 16K
drwxr-xr-x 2 501 games 4.0K Dec 8 23:20 .
drwxr-xr-x 3 501 games 4.0K Dec 8 23:20 ..
-rw-r--r-- 1 501 games 3.5K Dec 8 23:20 WrappedStringList.php
-rw-r--r-- 1 501 games 3.3K Dec 8 23:20 WrappedString.php

# but the git repo has an src dir with 2 dirs = WrappedString & Wikimedia. Both of *these* dirs have the 2 files
# https://phabricator.wikimedia.org/diffusion/MWVD/browse/master/wikimedia/wrappedstring/src/
[root@hetzner2 htdocs]# ls -lah /var/tmp/backups_for_migration_from_hetzner1/wiki_20180120/current/core/vendor/wikimedia/wrappedstring/src/
total 16K
drwxr-xr-x 4 root root 4.0K Jan 22 21:22 .
drwxr-xr-x 3 root root 4.0K Jan 22 21:22 ..
drwxr-xr-x 2 root root 4.0K Jan 22 21:22 Wikimedia
drwxr-xr-x 2 root root 4.0K Jan 22 21:22 WrappedString
[root@hetzner2 htdocs]# ls -lah /var/tmp/backups_for_migration_from_hetzner1/wiki_20180120/current/core/vendor/wikimedia/wrappedstring/src/WrappedString/
total 16K
drwxr-xr-x 2 root root 4.0K Jan 22 21:22 .
drwxr-xr-x 4 root root 4.0K Jan 22 21:22 ..
-rw-r--r-- 1 root root 98 Jan 22 21:22 WrappedStringList.php
-rw-r--r-- 1 root root 90 Jan 22 21:22 WrappedString.php
[root@hetzner2 htdocs]# ls -lah /var/tmp/backups_for_migration_from_hetzner1/wiki_20180120/current/core/vendor/wikimedia/wrappedstring/src/Wikimedia/
total 16K
drwxr-xr-x 2 root root 4.0K Jan 22 21:22 .
drwxr-xr-x 4 root root 4.0K Jan 22 21:22 ..
-rw-r--r-- 1 root root 3.6K Jan 22 21:22 WrappedStringList.php
-rw-r--r-- 1 root root 3.4K Jan 22 21:22 WrappedString.php
[root@hetzner2 htdocs]#
Mon Jan 22, 2018
- Marcin found that editing posts on the stagingforum failed with "You don't have permission to access /vanilla/post/editdiscussion/X on this server."
- I correlated this to a 403 error caused by modsecurity
--e381b03e-A-- [22/Jan/2018:14:02:11 +0000] WmXu49lQF2aUKXkcaav20AAAAAU 127.0.0.1 52050 127.0.0.1 8000 --e381b03e-B-- POST /vanilla/post/editdiscussion/237 HTTP/1.0 X-Real-IP: 173.234.159.236 X-Forwarded-Proto: https X-Forwarded-Port: 443 Host: stagingforum.opensourceecology.org Content-Length: 3125 User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:48.0) Gecko/20100101 Firefox/48.0 Accept: application/json, text/javascript, */*; q=0.01 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate, br Content-Type: application/x-www-form-urlencoded X-Requested-With: XMLHttpRequest Referer: https://stagingforum.opensourceecology.org/vanilla/post/editdiscussion/237 Cookie: Vanilla=20082-1519221520%7C48d7a44f72d9d86b5e70a0da0e4b83c8%7C1516629520%7C20082%7C1519221520; Vanilla-Volatile=20082-1516802320%7C2271357054a693999ba33b7013e7c80e%7C1516629520%7C20082%7C1516802320 DNT: 1 X-Forwarded-For: 173.234.159.236, 127.0.0.1 X-Varnish: 34288 --e381b03e-C-- Discussion%2FTransientKey=82P0S34XB3Q0&Discussion%2Fhpt=&Discussion%2FDiscussionID=237&Discussion%2FDraftID=0&Discussion%2FName=Roof&Discussion%2FCategoryID=1&Discussion%2FBody=I+think+we+need+to+address+what+seems+to+be+a+hole+in+the+GVCS+-+constructing+a+good+roof.+Shelter+is+one+of+the+big+requirements+we+have+to+cover%2C+and+living+in+the+pacific+northwest%2C+I+am+painfully+aware+of+the+importance+and+costs+associated+with+a+roof.+Compressed+earth+blocks+cover%2C+to+a+decent+extent%2C+building+walls.+Floors+are+somewhat+covered+by+the+dimensional+saw+mill%2C+compressed+earth+blocks%2C+and+concrete.+However%2C+what+isn't+taken+into+account+yet+is+the+most+expensive+and+difficult+part+of+a+house+-+the+roof.%3Cbr%3E%3Cbr%3EI+think+figuring+out+a+good+roofing+solution+should+be+one+of+the+primary+focal+points+for+the+%3Cu%3E%3Ca+href%3D%22http%3A%2F%2Fopenfarmtech.org%2Fforum%2Fdiscussion%2Fcomment%2F861%23Comment_861%22%3Efirst+wave%3C%2Fa%3E%3C%2Fu%3E+-+getting+the+basics+of+housing%2Ffood%2Fwater%2Fpower+up+and+running%2C+as+discussed+in+the+%3Cu%3E%3Ca+href%3D%22http%3A%2F%2Fopenfarmtech.org%2Fforum%2Fdiscussion%2F227%2Fbootstrapping-the-gvcs-priorities%22%3EGVCS+bootstrapping%3C%2Fa%3E%3C%2Fu%3E+thread.%3Cbr%3E%3Cbr%3EA+good+example+of+this+need+is+the+Factor+E+Farm+workshop+itself.+To+make+a+workshop+to+stage+the+development+of+the+GVCS%2C+they+were+able+to+make+part+of+the+structure+out+of+compressed+earth+block+columns.+However%2C+the+vast+majority+of+the+cost+of+the+structure+was+in+building+the+roof.+They+had+to+use+off-the-shelf+metal+roofing%2C+which+was+paid+for+through+donations%3A+%3Cu%3E%3Ca+href%3D%22http%3A%2F%2Fopenfarmtech.org%2Fweblog%2F2010%2F11%2Fconstruction-time%2F%22%3Edonations+for+roofing+materials%3C%2Fa%3E%3C%2Fu%3E%2C+%3Cu%3E%3Ca+href%3D%22http%3A%2F%2Fopenfarmtech.org%2Fweblog%2F2010%2F11%2Fconclusion-of-building-for-2010%2F%22%3Ebuilding+the+roof%3C%2Fa%3E%3C%2Fu%3E.%3Cbr%3E%3Cbr%3EAs+you+can+see%2C+building+a+roof+was+one+of+the+first+roadblocks%2C+and+was+solved+through+just+buying+one.%3Cbr%3E%26nbsp%3B%3Cbr%3EPerhaps+we+can+brainstorm+and+come+up+with+some+good+solutions.+I've+already+started+some+sections+and+given+some+background+requirements+on+the+wiki.+One+option+I+find+very+intriguing+myself+is+ferrocement.+You+can+see+it+being+discussed+in+the+%3Cu%3E%3Ca+href%3D%22http%3A%2F%2Fopenfarmtech.org%2Fweblog%2F2010%2F11%2Fconclusion-of-building-for-2010%2F%23comments%22%3Ecomments%3C%2Fa%3E%3C%2Fu%3E+of+the+building+the+roof+blog+entry+for+the+farm.+I+made+a+thread+about+the+need+for+an+%3Cu%3E%3Ca+href%3D%22htt
p%3A%2F%2Fopenfarmtech.org%2Fforum%2Fdiscussion%2F214%22%3Eopen+source+metal+lath+making+machine%3C%2Fa%3E%3C%2Fu%3E+with+roofing+as+one+of+its+applications.%3Cbr%3E%3Cbr%3EI've+started+a+wiki+entry+for+the+roof+here%3A+%3Cu%3Ehttp%3A%2F%2Fopenfarmtech.org%2Fwiki%2FRoof%3C%2Fu%3E+As+we+come+up+with+ideas+we+can+add+them.%3Cbr%3E%3Cbr%3EEdit%3A+This+is+Michael+attempting+to+edit+a+roof+post.%3Cbr%3E&Checkboxes%5B%5D=Announce&Checkboxes%5B%5D=Closed&Discussion%2FTags=&DeliveryType=VIEW&DeliveryMethod=JSON&Discussion/Save=Save --e381b03e-F-- HTTP/1.1 403 Forbidden Content-Length: 233 Connection: close Content-Type: text/html; charset=iso-8859-1 --e381b03e-E-- --e381b03e-H-- Message: Access denied with code 403 (phase 2). Pattern match "\\W{4,}" at ARGS:Discussion/Body. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_40_generic_attacks.conf"] [line "37"] [id "960024"] [rev "2"] [msg "Meta-Character Anomaly Detection Alert - Repetative Non-Word Characters"] [data "Matched Data: > - found within ARGS:Discussion/Body: I think we need to address what seems to be a hole in the GVCS - constructing a good roof. Shelter is one of the big requirements we have to cover, and living in the pacific northwest, I am painfully aware of the importance and costs associated with a roof. Compressed earth blocks cover, to a decent extent, building walls. Floors are somewhat covered by the dimensional saw mill, compressed earth blocks, and concrete. However, what isn't taken into acc..."] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "8"] Action: Intercepted (phase 2) Stopwatch: 1516629731152710 755 (- - -) Stopwatch2: 1516629731152710 755; combined=355, p1=104, p2=236, p3=0, p4=0, p5=14, sr=20, sw=1, l=0, gc=0 Response-Body-Transformed: Dechunked Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9. Server: Apache Engine-Mode: "ENABLED" --e381b03e-Z--
- realized that most of the whitelisting was done to just the 'wp-admin' dir, so I made it site-wide on the forums
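- a sketch of what a site-wide (rather than per-directory) whitelist could look like (the conf file name is assumed; the rule ID is the one that fired in the audit log above):
cat >> /etc/httpd/modsecurity.d/whitelist_forum.conf <<'EOF'
# whitelist CRS rules that false-positive on forum post bodies
SecRuleRemoveById 960024
EOF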
- Catarina reported that she was able to successfully upload 3M images to fef && also connect via sftp using filezilla.
- marked fef migration as complete CHG-2018-01-03
- moved LocalSettings.php out of the document root per these hardening guides
[root@hetzner2 htdocs]# date
Mon Jan 22 20:26:16 UTC 2018
[root@hetzner2 htdocs]# pwd
/var/www/html/wiki.opensourceecology.org/htdocs
[root@hetzner2 htdocs]# cat LocalSettings.php
<?php
# including separate file that contains the database password so that it is not stored within the document root.
# For more info see:
# * https://www.mediawiki.org/wiki/Manual:Security
# * https://wiki.r00tedvw.com/index.php/Mediawiki/Hardening
$docRoot = dirname( __FILE__ );
require_once "$docRoot/../LocalSettings.php";
?>
[root@hetzner2 htdocs]#
- this also required me to change the original LocalSettings.php file (now outside the docroot in /var/www/html/wiki.opensourceecology.org) so that $IP points inside 'htdocs'
# If you customize your file layout, set $IP to the directory that contains
# the other MediaWiki files. It will be used as a base to locate files.
if( defined( 'MW_INSTALL_PATH' ) ) {
	$IP = MW_INSTALL_PATH;
} else {
	$IP = dirname( __FILE__ ) . "/htdocs";
}
- decided to move the logo to 'images/' instead of just the base docroot to simplify future updates
- installed composer per the upgrade guide https://www.mediawiki.org/wiki/Manual:Upgrading
- which linked to https://www.mediawiki.org/wiki/Download_from_Git#Fetch_external_libraries
- else, I got this error
MediaWiki 1.30 internal error Installing some external dependencies (e.g. via composer) is required. External dependencies MediaWiki now also has some external dependencies that need to be installed via composer or from a separate git repo. Please see mediawiki.org for help on installing the required components.
- and the actual install is
yum install composer
- and I had to add /usr/share/php/ to open_basedir in /etc/php.ini
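- roughly, the php.ini change (a sketch; assumes the open_basedir entry is a single line):
sed -i '/^open_basedir/ s|$|:/usr/share/php/|' /etc/php.ini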
- the composer steps themselves are included in the code block below
- found that several of our MediaWiki extensions are no longer maintained
- TreeAndMenu https://www.mediawiki.org/wiki/Extension:TreeAndMenu
- RSSReader https://www.mediawiki.org/wiki/Extension:RSS_Reader
- google-coop https://www.mediawiki.org/wiki/Extension_talk:Google_Custom_Search_Engine
- Mtag https://www.mediawiki.org/wiki/MediaWiki_and_LaTeX_on_a_host_with_shell_access
- PayPal https://www.mediawiki.org/wiki/Extension:Paypal
- Flattr https://www.mediawiki.org/wiki/Extension:Flattr
- JSWikiGnatt https://www.mediawiki.org/wiki/Extension:JSWikiGantt
- ProxyConnect https://www.mediawiki.org/wiki/Extension:ProxyConnect
- FreeMind https://www.mediawiki.org/wiki/Extension:FreeMind
- commented-out the entire fnAddPersonalUrls() function, a section of LocalSettings.php that caused the following error
[Mon Jan 22 22:25:45.130218 2018] [:error] [pid 26913] [client 127.0.0.1:50368] PHP Fatal error: Call to undefined method User::getSkin() in /var/www/html/wiki.opensourceecology.org/LocalSettings.php on line 421
- upgraded mediawiki install, including a switch to a git-backed core download to make future updates easier
- removed googleAnalytics require from LocalSettings.php, because privacy matters && awstats works.
# DECLARE VARIABLES
source /root/backups/backup.settings
stamp="20180120"
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/wiki_${stamp}"
backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2"
backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz"
dbName_hetzner1='osewiki'
dbName_hetzner2='osewiki_db'
dbUser_hetzner2="osewiki_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/wiki.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"

# clone the latest stable version of the Mediawiki core code from git
pushd ${backupDir_hetzner2}/current
time nice git clone https://gerrit.wikimedia.org/r/p/mediawiki/core.git
pushd core
latestStableMediawikiVersion=`git tag -l | sort -V | grep -E '^[0-9\.]*$' | tail -n1`
git checkout "${latestStableMediawikiVersion}"
git clone https://gerrit.wikimedia.org/r/p/mediawiki/vendor.git

# extensions
pushd extensions
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/CategoryTree.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ConfirmAccount.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ConfirmEdit.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Cite.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ParserFunctions.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Gadgets.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ReplaceText.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Renameuser.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/UserMerge.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Nuke.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Widgets.git
pushd Widgets
git submodule init
git submodule update
popd
#cp -r ../../../old/htdocs/extensions/TreeAndMenu .
#cp -r ../../../old/htdocs/extensions/RSSReader .
#cp -r ../../../old/htdocs/extensions/google-coop.php
#cp -r ../../../old/htdocs/extensions/Mtag.php
#cp -r ../../../old/htdocs/extensions/PayPal.php
#cp -r ../../../old/htdocs/extensions/Flattr
#cp -r ../../../old/htdocs/extensions/JSWikiGantt
#cp -r ../../../old/htdocs/extensions/ProxyConnect
popd

# skins
pushd skins
git clone https://gerrit.wikimedia.org/r/p/mediawiki/skins/CologneBlue.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/skins/Modern.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/skins/MonoBook.git
git clone https://gerrit.wikimedia.org/r/p/mediawiki/skins/Vector.git
popd
popd

# copy core into the docroot
mv ${vhostDir_hetzner2}/* ${backupDir_hetzner2}/old/
mkdir "${docrootDir_hetzner2}"
time nice rsync -av --progress core/ "${docrootDir_hetzner2}/"

# copy back essential content
rsync -av --progress "../old/LocalSettings.php" "${vhostDir_hetzner2}/"
rsync -av --progress "../old/htdocs/LocalSettings.php" "${docrootDir_hetzner2}/"
rsync -av --progress "../old/htdocs/images" "${docrootDir_hetzner2}/"

# set permissions
chown -R apache:apache "${vhostDir_hetzner2}"
find "${vhostDir_hetzner2}" -type d -exec chmod 0750 {} \;
find "${vhostDir_hetzner2}" -type f -exec chmod 0640 {} \;
find "${docrootDir_hetzner2}/images" -type f -exec chmod 0660 {} \;
find "${docrootDir_hetzner2}/images" -type d -exec chmod 0770 {} \;
chown apache:apache-admins "${vhostDir_hetzner2}/LocalSettings.php"
chmod 0440 "${vhostDir_hetzner2}/LocalSettings.php"
chown apache:apache-admins "${docrootDir_hetzner2}/LocalSettings.php"
chmod 0440 "${docrootDir_hetzner2}/LocalSettings.php"
- having some 404 issues with images
<td><a href="http://www.shuttleworthfoundation.org/"><img alt="Shuttleworth funded-02---web.jpg" src="/wiki/images/thumb/b/be/Shuttleworth_funded-02---web.jpg/200px-Shuttleworth_funded-02---web.jpg" width="200" height="55" srcset="/wiki/images/thumb/b/be/Shuttleworth_funded-02---web.jpg/300px-Shuttleworth_funded-02---web.jpg 1.5x, /wiki/images/thumb/b/be/Shuttleworth_funded-02---web.jpg/400px-Shuttleworth_funded-02---web.jpg 2x" /></a>
- the above links to (400)
- but it should link to (200)
- I logged into the hetzner1 konsoleH config & tried to pull up the apache config there (since we can't see it over ssh on this annoying shared server). In the wui, click 'opensourceecology.org' -> Services -> Server Configuration -> Click the yellow wrench next to public_html -> Enhanced View:
Options All -Indexes

# Site-wide redirects
#Redirect 301 /community http://community.opensourceecology.org
#Redirect 301 /forum http://forum.opensourceecology.org
#Redirect 301 /weblog http://blog.opensourceecology.org
#RedirectMatch 301 /weblog/contact-us http://openfarmtech.org/wiki/Contact_us
# Redirect 301 /weblog/contact-us http://openfarmtech.org/wiki/Contact_us
# http://openfarmtech.org/w/index.php?title=CiviCRM_tech_notes&action=edit&redlink=1
# RedirectMatch 301 /CiviCRM http://openfarmtech.org/civicrm

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.
RewriteRule ^(.*)$ http://opensourceecology.org/$1 [R=301,L,QSA]
#RewriteRule ^civicrm/(.*)$ /community/civicrm/$1 [PT,L,QSA]
RewriteRule ^wiki/(.*)$ /w/index.php?title=$1 [PT,L,QSA]
#RewriteRule ^wiki/(.*:.*)$ wiki/index.php?title=$1 [L,QSA]
RewriteRule ^wiki/(.*:.*)$ /w/index.php?title=$1 [L,QSA]
RewriteRule ^wiki/*$ /w/index.php [L,QSA,T=text/html]
RewriteRule ^index.php/(.*)$ /wiki/$1?old-url=slash [R=permanent,L,QSA]
# RewriteRule ^/*$ /w/index.php [L,QSA]

# RewriteLog "/home/marcin_ose/openfarmtech.org/rewrite.log"
# RewriteLogLevel 3

# See http://docs.jboss.org/jbossweb/3.0.x/proxy-howto.html
# ProxyPass /OpenKM http://localhost:8080/OpenKM
# ProxyPassReverse /OpenKM http://localhost:8080/OpenKM
# RewriteRule ^OpenKM/(.*)$ http://localhost:8080/OpenKM/$1 [P]

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
- actually, just setting `$wgScriptPath = ""` in LocalSettings.php fixed everything!!
- attempting to login produces an error message
Database error A database query error has occurred. This may indicate a bug in the software.[WmZ1UeWiTZqK5j1xUV@YaQAAAAo] 2018-01-22 23:35:46: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"
- err, I found my password leaked in cleartext to /var/log/httpd/modsec_audit.log
- added "wpPassword" to sanitiseArg list in /etc/httpd/modexecurity.d/do_not_log_passwords.conf
[root@hetzner2 modsecurity.d]# date
Mon Jan 22 23:46:06 UTC 2018
[root@hetzner2 modsecurity.d]# cd /etc/httpd
[root@hetzner2 httpd]# cat modsecurity.d/do_not_log_passwords.conf
################################################################################
# File:     modsecurity.d/do_not_log_passwords.conf
# Version:  0.2
# Purpose:  Defines a list of POST arguments that contain passwords and instructs
#           modsecurity to sanitise-out the values of these variables (with **s)
#           when logging to files (ie: /var/log/httpd/modsec_audit.log)
# Author:   Michael Altfield <michael@opensourceecology.org>
# Created:  2017-12-16
# Updated:  2018-01-22
################################################################################
SecAction "nolog,phase:2,id:131,sanitiseArg:password,sanitiseArg:Password,sanitiseArg:wpPassword,sanitiseArg:newPassword,sanitiseArg:oldPassword,sanitiseArg:pwd"
[root@hetzner2 httpd]#
- it's possible that the login issue is due to a required update.php run that's still pending, but I'm getting an error when I attempt to run it
[root@hetzner2 maintenance]# php update.php PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 715 PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 674 PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/Maintenance.php on line 715 PHP Warning: putenv() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/Setup.php on line 53 PHP Warning: is_dir(): open_basedir restriction in effect. File(/tmp) is not within the allowed path(s): (/home/wp/.wp-cli:/usr/share/pear:/var/lib/php/tmp_upload:/var/lib/php/session:/var/www/html/www.openbuildinginstitute.org:/var/www/html/staging.openbuildinginstitute.org/:/var/www/html/www.opensourceecology.org/:/var/www/html/fef.opensourceecology.org/:/var/www/html/seedhome.openbuildinginstitute.org:/var/www/html/oswh.opensourceecology.org/:/var/www/html/forum.opensourceecology.org/:/var/www/html/wiki.opensourceecology.org/:/usr/share/php/Composer) in /var/www/html/wiki.opensourceecology.org/htdocs/includes/libs/filebackend/fsfile/TempFSFile.php on line 90 PHP Notice: Undefined index: SERVER_NAME in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 1507 PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/PHPSessionHandler.php on line 126 PHP Warning: ini_set() has been disabled for security reasons in /var/www/html/wiki.opensourceecology.org/htdocs/includes/session/PHPSessionHandler.php on line 127 PHP Notice: Undefined index: SERVER_NAME in /var/www/html/wiki.opensourceecology.org/htdocs/includes/GlobalFunctions.php on line 1507 MediaWiki 1.30.0 Updater oojs/oojs-ui: 0.25.1 installed, 0.23.0 required. wikimedia/ip-set: 1.2.0 installed, 1.1.0 required. wikimedia/relpath: 2.1.1 installed, 2.0.0 required. wikimedia/remex-html: 1.0.2 installed, 1.0.1 required. wikimedia/running-stat: 1.2.1 installed, 1.1.0 required. wikimedia/wrappedstring: 2.3.0 installed, 2.2.0 required. Error: your composer.lock file is not up to date. Run "composer update --no-dev" to install newer dependencies [root@hetzner2 maintenance]#
- I can't run composer, since it requires some unsafe, disabled php functions. That's why I chose to grab them via git (`git clone https://gerrit.wikimedia.org/r/p/mediawiki/vendor.git`). But apparently it gave me old versions?
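- one possible workaround (untested here, and -d may not be able to clear disable_functions on all builds): override the hardened settings for this single CLI invocation instead of relaxing /etc/php.ini globally
# run the updater with open_basedir & disable_functions cleared for this one process
php -d open_basedir= -d disable_functions= /var/www/html/wiki.opensourceecology.org/htdocs/maintenance/update.php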
- re-reading the error message: it says my installed versions are too new!
- looks like the branches aren't maintained; we want REL1_30 but it isn't there
[root@hetzner2 vendor]# date Tue Jan 23 00:01:57 UTC 2018 [root@hetzner2 vendor]# pwd /var/tmp/backups_for_migration_from_hetzner1/wiki_20180120/current/core/vendor [root@hetzner2 vendor]# git branch -r origin/HEAD -> origin/master origin/fundraising/REL1_27 origin/master origin/wmf/1.31.0-wmf.15 origin/wmf/1.31.0-wmf.16 origin/wmf/1.31.0-wmf.17 [root@hetzner2 vendor]#
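- the vendor repo may still tag releases even though the REL1_30 branch is missing; worth checking before giving up (the 1.30.0 tag name is an assumption):
cd /var/tmp/backups_for_migration_from_hetzner1/wiki_20180120/current/core/vendor
git tag -l | sort -V | tail
git checkout 1.30.0   # hypothetical tag name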
Sat Jan 20, 2018
- the db backup & file backup of the wiki finished in a little over an hour
... osemain@dedi978:~/tmp/backups_for_migration_to_hetzner2/wiki_20180120/current$ time nice tar -czvf ${backupDir_hetzner1}/current/${backupFileName_files_hetzner1} ${vhostDir_hetzner1} ... real 71m2.031s user 17m36.700s sys 1m38.868s osemain@dedi978:~/tmp/backups_for_migration_to_hetzner2/wiki_20180120/current$
- I initiated an scp of the data to hetzner2
# DECLARE VARIABLES
source /root/backups/backup.settings
stamp="20180120"
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/wiki_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/wiki_${stamp}"
backupFileName_db_hetzner1="mysqldump_wiki.${stamp}.sql.bz2"
backupFileName_files_hetzner1="wiki_files.${stamp}.tar.gz"
dbName_hetzner1='osewiki'
dbName_hetzner2='osewiki_db'
dbUser_hetzner2="osewiki_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/wiki.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"

# STEP 1: COPY FROM HETZNER1
mkdir -p ${backupDir_hetzner2}/{current,old}
mv ${backupDir_hetzner2}/current/* ${backupDir_hetzner2}/old/
scp -P 222 osemain@dedi978.your-server.de:${backupDir_hetzner1}/current/* ${backupDir_hetzner2}/current/
- transfer of hetzner1 backups to hetzner2 finished after 8 minutes
- added wiki.opensourceecology.org A record in cloudflare DNS to 138.201.84.243
- disabled CDN/DNS proxy in cloudflare for the 'network' subdomain
- decreased the TTL of all DNS entries to the lowest possible value (2 minutes), except 'www', 'opensourceecology.org', and 'blog' (all 3 use the CDN for now, and are thus required to be set to TTL = 'automatic')
- created/updated necessary config files for wiki.opensourceecology.org
- /etc/httpd/conf.d/00-wiki.opensourceecology.org.conf
- /etc/varnish/sites-enabled/wiki.opensourceecology.org
- /etc/varnish/all-vhosts.vcl
- /etc/nginx/conf.d/wiki.opensourceecology.org.conf
- created necessary dirs
- /var/www/html/wiki.opensourceecology.org/htdocs
- /var/log/httpd/wiki.opensourceecology.org
- /var/log/nginx/wiki.opensourceecology.org
- updated /etc/php.ini to include the new vhost dir ("/var/www/html/wiki.opensourceecology.org/") in its "open_basedir"
- added wiki SAN to opensourceecology.org cert
certbot -nv --expand --cert-name opensourceecology.org certonly -v --webroot -w /var/www/html/fef.opensourceecology.org/htdocs/ -d fef.opensourceecology.org -w /var/www/html/www.opensourceecology.org/htdocs -d osemain.opensourceecology.org -w /var/www/html/oswh.opensourceecology.org/htdocs/ -d oswh.opensourceecology.org -w /var/www/html/forum.opensourceecology.org/htdocs -d stagingforum.opensourceecology.org -w /var/www/html/wiki.opensourceecology.org/htdocs -d wiki.opensourceecology.org
/bin/chmod 0400 /etc/letsencrypt/archive/*/pri*
nginx -t && service nginx reload
- created osewiki_db mysql database on hetzner2 from dump from hetzner1
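- a sketch of that step (untested as written; assumes the dump contains only the wiki db, and reuses the variables & ${mysqlPass} from the scp block above):
# create the db & its user, then import the bzip2'd dump from hetzner1
time nice mysql -uroot -p${mysqlPass} -sNe "CREATE DATABASE ${dbName_hetzner2}; GRANT ALL ON ${dbName_hetzner2}.* TO '${dbUser_hetzner2}'@'localhost' IDENTIFIED BY '${dbPass_hetzner2}'; FLUSH PRIVILEGES;"
bzcat ${backupDir_hetzner2}/current/${backupFileName_db_hetzner1} | mysql -uroot -p${mysqlPass} ${dbName_hetzner2}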
- copied htdocs files on hetzner2 from backup on hetzner1
- updated LocalSettings.php with new db name & credentials
- updated permissions of docroot files
- commented-out $wgScriptPath & $wgStylePath to remove the "/w/" subdir in LocalSettings.php
- removed the "/w/" subdir prefix from $wgLogo in LocalSettings.php
- commented-out $wgServer & $wgFavicon to prevent a 301 redirect back to the naked domain opensourceecology.org
- commented-out $wgCacheDirectory & $wgUseFileCache. And `rm -rf cache`. This directory is *supposed* to be located outside the docroot for security reasons. But we won't need this feature as we use varnish https://www.mediawiki.org/wiki/Manual:Security#File_permissions
- added "Alias /wiki /var/www/html/wiki.opensourceecology.org/htdocs/index.php" to /etc/httpd/conf.d/00-wiki.opensourceecology.org
- commented-out debugging lines in LocalSettings.php
################################################################
# Debugging
# error_reporting(E_ALL | E_STRICT);
# error_reporting(E_ALL);
# ini_set("display_errors", 1);
# $wgShowExceptionDetails = true;

## Verbose output to user
#$wgShowSQLErrors = true;
#$wgDebugDumpSql = true;

#$wgDebugLogFile = "/usr/home/osemain/public_html/logs/wiki-error.log";
#$wgDebugLogFile = "/home/oft_site/logs/wiki-error.log";
# $wgDebugRawPage = true;
# $wgDebugComments = true;
################################################################
- saw some errors for generating temporary thumbnails
Error creating thumbnail: Unable to save thumbnail to destination
- attempted setting the image directory to be writeable, to no avail
find "${docrootDir_hetzner2}/images" -type f -exec chmod 0660 {} \;
find "${docrootDir_hetzner2}/images" -type d -exec chmod 0770 {} \;
- began investigating the guide to install Mediawiki via git https://www.mediawiki.org/wiki/Download_from_Git
pushd ${backupDir_hetzner2}/current
time nice git clone https://gerrit.wikimedia.org/r/p/mediawiki/core.git
- determined that the latest version of Mediawiki is v1.30.0, and that we're currently running v1.24.2 https://wiki.opensourceecology.org/wiki/Special:Version
Thr Jan 18, 2018
- fixed awstats cron job & config files
- confirmed that a db dump includes image tags with the domain-name hard-coded in the href. That was put in place by Wordpress's "Add Media" wui button. That's not good; the links should be relative in the db!
- did some research & found that wp core devs decided 7 years ago to keep absolute paths. This is especially bad for continuous integration or even a basic staging site
- https://core.trac.wordpress.org/ticket/17048
- there's no good, robust solution.
- https://stackoverflow.com/questions/17187437/relative-urls-in-wordpress
- https://wordpress.org/plugins/relative-url/
- https://wordpress.org/plugins/root-relative-urls/
- https://deluxeblogtips.com/relative-urls/
- http://www.456bereastreet.com/archive/201010/how_to_make_wordpress_urls_root_relative/
- the takeaway is:
- Let wordpress do its thing. Don't waste effort fighting wp when it auto-inserts an absolute path.
- However, whenever you have to manually type a path in (ie: when configuring a widget, plugin nav bar, etc), please use a relative link.
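- when a swap *is* unavoidable (ie: at migration time), wp-cli has a serialized-data-safe search-replace; a sketch using the fef strings from this migration (run from the wp docroot; always --dry-run first):
wp search-replace 'http://opensourceecology.org/fef' 'https://fef.opensourceecology.org' --dry-run
wp search-replace 'http://opensourceecology.org/fef' 'https://fef.opensourceecology.org'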
- attempted to fix the "Http error" reported by wordpress after attempting to upload a large image
- using the browser debugger, I saw that it was nginx that returned a 413 error. I fixed this by increasing 'client_max_body_size' to '10M' in /etc/nginx/nginx.conf
[root@hetzner2 dbChange.20180118_12:22:36]# grep -B 1 'client_max_body_size' /etc/nginx/nginx.conf
	# allow large posts for image uploads
	#client_max_body_size 1k;
	#client_max_body_size 900k;
	client_max_body_size 10M;
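- after any nginx.conf change like this, validate & reload (same pattern used elsewhere in this log):
nginx -t && service nginx reload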
- next, I got a 403 error from /wp-admin/async-upload.php
- /var/log/httpd/fef.opensourceecology.org/error_log shows a modsecurity issue:
==> /var/log/httpd/fef.opensourceecology.org/error_log <== [Thu Jan 18 14:56:25.263164 2018] [:error] [pid 27682] [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Match of "eq 0" against "MULTIPART_UNMATCHED_BOUNDARY" required. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_20_protocol_violations.conf"] [line "219"] [id "960915"] [rev "1"] [msg "Multipart parser detected a possible unmatched boundary."] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.9"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/INVALID_REQ"] [tag "CAPEC-272"] [hostname "fef.opensourceecology.org"] [uri "/wp-admin/async-upload.php"] [unique_id "WmC1mU7FUiUY6HdrzSWfWgAAAAA"]
- as above, whitelisted rule IDs:
- 960915, multipart_unmatched_boundary
- 200003, multipart_unmatched_boundary
- moved '/usr/home/osemain/public_html/archive/addon-domains/opensourcewarehouse.org' to '/usr/home/osemain/noBackup/deleteMeIn2019/oswh_olddocroot'
- added a 301 redirect from 'http://opensourcewarehouse.org' to 'https://oswh.opensourceecology.org' in new file = '/usr/home/osemain/public_html/archive/addon-domains/opensourcewarehouse.org/index.php'
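- the exact contents of that stub weren't recorded here; a minimal 301 index.php might look like this (written as a heredoc):
cat > /usr/home/osemain/public_html/archive/addon-domains/opensourcewarehouse.org/index.php <<'EOF'
<?php
// permanent redirect to the new oswh vhost on hetzner2
header("HTTP/1.1 301 Moved Permanently");
header("Location: https://oswh.opensourceecology.org" . $_SERVER['REQUEST_URI']);
exit;
EOF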
Sat Jan 13, 2018
- Meeting with Marcin
Fri Jan 12, 2018
- finished configuring oswh wp plugins
- determined that the oswh hack was responsible for injecting pop-ups for a "windows defender" phishing site on some subset of page loads
- gained access to oseforums, oseblog, osecivi users via ssh on our non-dedicated server (hetzner1)
- checked /etc/passwd & found another 8x org-specific users with home directories that I still couldn't access
- emailed hetzner for advice on how to gain ssh access to these users' home directories
addon:x:1011:1011:addontest.opensourceecology.org:/usr/home/addon:bin/false
oseirc:x:1018:1018:irc.opensourceecology.org:/usr/home/oseirc:bin/false
oseholla:x:1019:1019:holla.opensourceecology.org:/usr/home/oseholla:bin/false
osesurv:x:1020:1020:survey.opensourceecology.org:/usr/home/osesurv:/bin/bash
sandbox:x:1021:1021:sandbox.opensourceecology.org:/usr/home/sandbox:/bin/false
microft:x:1022:1022:test.opensourceecology.org:/usr/home/microft:/bin/bash
zabbix:x:118:121::/var/run/zabbix:/bin/false
openswh:x:1023:1023:opensourcewarehouse.org:/usr/home/openswh:/bin/false
- created backup of oseforum's docroot (58M) & db (44M). Both sizes are bzip2'd.
# DECLARE VARIABLES
source /root/backups/backup.settings
stamp="20180113"
backupDir_hetzner1="/usr/home/oseforum/tmp/backups_for_migration_to_hetzner2/oseforum_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/oseforum_${stamp}"
backupFileName_db_hetzner1="mysqldump_oseforum.${stamp}.sql.bz2"
backupFileName_files_hetzner1="oseforum_files.${stamp}.tar.gz"
dbName_hetzner1='oseforum'
dbName_hetzner2='oseforum_db'
dbUser_hetzner2="oseforum_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/forum.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"
Mon Jan 08, 2018
- tried installing the fresh version of the Eventor theme to the website running the old wp core, but it was still broken
- reverted to the docroot I had backed-up before trying the outdated wp core && installed the freshly-downloaded Eventor v1.7 theme onto it, and the site actually worked!
- finally, attempted the wp-cli commands to update themes & plugins && install our minimal set of sec plugins. The site was still functional afterwards
Fri Jan 05, 2018
- investigation of minor fef issues
Thr Jan 04, 2018
- downloaded the Eventor theme v 1.7, thanks to Simone's contact with Themes Kingdom
- Hetzner responded saying we can use WebFTP to upload to $HOME by clicking "the server at the top"
- Marcin responded with some issues with osemain's ephemeral clone
- Catarina found some linking issues in fef
- I brought the site down, did a string replacement of all occurrences of 'http://opensourceecology.org/fef' with '/', brought the site back up, and asked Catarina to check again
- updated documentation at Wordpress#replace_strings_everywhere_in_wp_database_backend
Wed Jan 03, 2018
- migrated fef to hetzner2 CHG-2018-01-03
- updated statuscake for obi to hit 'https://www.openbuildinginstitute.org'
- updated statuscake for fef to hit 'https://fef.opensourceecology.org'
- ensured that ssh was activated for all domains/users on our (apparently dedicated, per hetzner support) hetzner1 server (but without root access) via the konsoleH site -> click on the server -> Account Management -> SSH access -> Select domain (for each) -> Next
- the konsoleH wui only allowed editing files in the docroot, not the user's home-dir, which prevented me from actually adding my ssh public key to the $HOME/.ssh/authorized_keys file
- I emailed hetzner support back asking if [a] they could just add my pub key to all our user account's authorized_keys files or [b] tell me how I could reset all the user's passwords
- oswh was cannibalized by a virus & is awaiting a fresh version of the theme. The forums are awaiting access to the user account. I'm now going to work on beginning the migration of osemain
- it looks like the relevant files are hetzner1:/usr/home/osemain/public_html/, except the following subdirs:
- archive
- w
- logs
- mediawiki-1.24.2.extra
- the entire dir is 23G. Excluding the above, it's ~ 0.7G
####################
# run on hetzner1  #
####################

# STEP 0: CREATE BACKUPS
source /usr/home/osemain/backups/backup.settings
/usr/home/osemain/backups/backup.sh
# when finished, SSH into the dreamhost server to verify that the whole system backup was successful before proceeding
bash -c 'source /usr/home/osemain/backups/backup.settings; ssh $RSYNC_USER@$RSYNC_HOST du -sh backups/hetzner1/*'

# DECLARE VARIABLES
source /usr/home/osemain/backups/backup.settings
stamp=`date +%Y%m%d`
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/osemain_${stamp}"
backupFileName_db_hetzner1="mysqldump_osemain.${stamp}.sql.bz2"
backupFileName_files_hetzner1="osemain_files.${stamp}.tar.gz"
vhostDir_hetzner1='/usr/www/users/osemain/'
dbName_hetzner1='ose_osemain'
dbUser_hetzner1="${mysqlUser_osemain}"
dbPass_hetzner1="${mysqlPass_osemain}"

# STEP 1: BACKUP DB
mkdir -p ${backupDir_hetzner1}/{current,old}
pushd ${backupDir_hetzner1}/current/
mv ${backupDir_hetzner1}/current/* ${backupDir_hetzner1}/old/
time nice mysqldump -u"${dbUser_hetzner1}" -p"${dbPass_hetzner1}" --all-databases | bzip2 -c > ${backupDir_hetzner1}/current/${backupFileName_db_hetzner1}

# STEP 2: BACKUP FILES
time nice tar -czvf ${backupDir_hetzner1}/current/${backupFileName_files_hetzner1} --exclude="${vhostDir_hetzner1}logs" --exclude="${vhostDir_hetzner1}w" --exclude="${vhostDir_hetzner1}archive" --exclude="${vhostDir_hetzner1}mediawiki-1.24.2.extra" ${vhostDir_hetzner1}
- the gz-compressed tarball generated from above was 353M.
# DECLARE VARIABLES
source /root/backups/backup.settings
#stamp=`date +%Y%m%d`
stamp="20180103"
backupDir_hetzner1="/usr/home/osemain/tmp/backups_for_migration_to_hetzner2/osemain_${stamp}"
backupDir_hetzner2="/var/tmp/backups_for_migration_from_hetzner1/osemain_${stamp}"
backupFileName_db_hetzner1="mysqldump_osemain.${stamp}.sql.bz2"
backupFileName_files_hetzner1="osemain_files.${stamp}.tar.gz"
dbName_hetzner1='ose_osemain'
dbName_hetzner2='osemain_db'
dbUser_hetzner2="osemain_user"
dbPass_hetzner2="CHANGEME"
vhostDir_hetzner2="/var/www/html/www.opensourceecology.org"
docrootDir_hetzner2="${vhostDir_hetzner2}/htdocs"
- created domain name 'osemain.opensourceecology.org' for testing the osemain site on hetzner2
- using above vars, I followed the guide to migrate the files & db data from hetzner1 to hetzner2 Wordpress#migrate_site_from_hetzner1_to_hetzner2
- created necessary files & dirs:
- /etc/httpd/conf.d/00-www.opensourceecology.org.conf
- /etc/varnish/sites-enabled/www.opensourceecology.org
- /etc/nginx/conf.d/www.opensourceecology.org.conf
- this file has a temporary override for the 'Host' header passed to varnish, since the staging url is going to be 'osemain.opensourceecology.org' but the prod site will be 'opensourceecology.org'
- /var/log/httpd/www.opensourceecology.org
- /var/log/nginx/www.opensourceecology.org
- updated necessary files
- /etc/varnish/all-vhosts.vcl
- /etc/php.ini
- finished setting up ephemeral clone of osemain at https://osemain.opensourceecology.org
- sent email to Marcin & Catarina for validation
Tue Jan 02, 2018
- got an email from Simone Cicero stating that she emailed Themes Kingdom for a clean copy of Eventor 1.7
- emailed back-and-forth with hetzner
- learned that the forums are in /usr/www/users/oseforum/
- learned that we have a bunch of users on this box, and it might even be dedicated just for us (though without root access)
osemain@dedi978:~$ grep 'ose' /etc/group
users:x:100:osemain,addon,osecivi,oseblog,oseforum,oseirc,oseholla,osesurv,sandbox,microft,openswh
osemain:x:1010:
osecivi:x:1014:
oseblog:x:1015:
oseforum:x:1016:
oseirc:x:1018:
oseholla:x:1019:
osesurv:x:1020:
- but I couldn't actually access the home dirs of the other users through 'osemain'
osemain@dedi978:~$ date
Tue Jan 2 16:21:13 CET 2018
osemain@dedi978:~$ ls -lah /usr/home/
ls: cannot open directory /usr/home/: Permission denied
osemain@dedi978:~$ ls -lah /usr/home/addon
ls: cannot open directory /usr/home/addon: Permission denied
osemain@dedi978:~$ ls -lah /usr/home/osecivi
ls: cannot open directory /usr/home/osecivi: Permission denied
osemain@dedi978:~$ ls -lah /usr/home/oseblog
ls: cannot open directory /usr/home/oseblog: Permission denied
osemain@dedi978:~$ ls -lah /usr/home/oseirc
ls: cannot open directory /usr/home/oseirc: Permission denied
osemain@dedi978:~$ ls -lah /usr/home/oseforum
ls: cannot open directory /usr/home/oseforum: Permission denied
osemain@dedi978:~$ ls -lah /usr/home/osesurv
ls: cannot open directory /usr/home/osesurv: Permission denied
osemain@dedi978:~$ ls -lah /usr/home/openswh
ls: cannot open directory /usr/home/openswh: Permission denied
- so I asked hetzner support to add the 'osemain' user to all of the other users' groups listed above, and I asked them to find any other accounts of ours that I may have missed