CloudPanel website causing “Too many redirects”

I have installed CloudPanel, and the new website immediately threw a “Too many redirects” error. This happens because my SSL certificates are handled by a reverse proxy, while CloudPanel also installs its own certificates, so the two systems get in each other’s way.

CloudPanel can also install a Let’s Encrypt certificate, but that only works in more conventional setups. In mine, DNS points to a proxy that listens on a certain IP address, and that proxy forwards the request to a virtual machine on one of my servers.

So, here is my probably unconventional method of disabling the SSL certificates on my CloudPanel installation:

  1. Open the CloudPanel control panel.
  2. Select the website you want to edit.
  3. Choose the Vhost tab.
  4. Replace the original vhost code (first block below) with the new code (second block):
Original vhost (top part):

server {
  listen 80;
  listen [::]:80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  {{ssl_certificate_key}}
  {{ssl_certificate}}
  server_name subdomain.3xn.nl;
  {{root}}

  {{nginx_access_log}}
  {{nginx_error_log}}

  if ($scheme != "https") {
    rewrite ^ https://$host$uri permanent;
  }

New vhost (top part), with the SSL listeners, certificate lines and forced-HTTPS rewrite commented out:

server {
  listen 80;
  listen [::]:80;
  # listen 443 ssl http2;
  # listen [::]:443 ssl http2;
  # {{ssl_certificate_key}}
  # {{ssl_certificate}}
  server_name subdomain.3xn.nl;
  {{root}}

  {{nginx_access_log}}
  {{nginx_error_log}}

  # if ($scheme != "https") {
  #   rewrite ^ https://$host$uri permanent;
  # }

The rest of the vhost, below these lines, stays exactly the same.

Done! Your website should now say “Hello world :-)”

You can see that I have disabled the listeners on port 443, the certificate and key paths, and the forced-HTTPS rewrite. I chose to switch off the forced HTTPS because my proxy already takes care of that.
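For context, this is roughly what the proxy side can look like. It is a minimal sketch, assuming an nginx reverse proxy in front; the upstream address and certificate paths are placeholders and not my actual setup:

server {
  listen 80;
  listen [::]:80;
  server_name subdomain.3xn.nl;
  # The proxy does the forced HTTPS, so the backend vhost no longer has to.
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name subdomain.3xn.nl;

  # Placeholder certificate paths, replace with your own.
  ssl_certificate     /etc/ssl/certs/subdomain.3xn.nl.pem;
  ssl_certificate_key /etc/ssl/private/subdomain.3xn.nl.key;

  location / {
    # Plain HTTP to the CloudPanel VM on the internal network (placeholder IP).
    proxy_pass http://192.168.1.50:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
  }
}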

This post is subject to change, but it should help you along your way!


How to update Mastodon to a new version

Updating Mastodon and creating backups are important steps to ensure the security and stability of your instance. Here’s a comprehensive tutorial on how to update Mastodon, including making backups of the database and assets:

Note: Always perform updates on a test/staging instance before applying them to your live instance. This tutorial assumes you have some basic knowledge of the command line and server administration.

Click here for the backup steps. It basically comes down to a database dump, a backup of the settings file, and a Redis dump; there is a short sketch of these steps a bit further down. If you also want to back up your assets (user-uploaded files such as images), back up the folder named “public/system”. Keep in mind that this folder can be rather large. Actually, it can become massive.

After a good 90 minutes, I gave up on trying to show you how large the asset folder is. So beware if you are going to make a backup of it. Perhaps you can just skip the cache folder?

You can always check the folder size by using NCDU, for which you can find the manual here. Also, installations may vary, but this is an example of my instance.
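For reference, here is a minimal sketch of those backup steps, assuming a standard non-Docker install with PostgreSQL, the live directory in /home/mastodon/live, and Redis writing its dump to /var/lib/redis. The database name mastodon_production and the /backup/ target directory are placeholders, so adjust them to your own setup:

# Database dump (run as a user that may access the mastodon_production database)
pg_dump -Fc mastodon_production > /backup/mastodon_db.dump

# Settings file
cp /home/mastodon/live/.env.production /backup/env.production

# Redis dump (may need root)
cp /var/lib/redis/dump.rdb /backup/redis-dump.rdb

# User-uploaded files (this is the part that can get massive)
cp -a /home/mastodon/live/public/system/. /backup/public-system/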


Upgrade procedure.

  1. su - mastodon
  2. cd /home/mastodon/live
  3. git fetch --tags
  4. git checkout [the most recent version tag, starting with the letter v, for example v4.0.1]

    Command example: git checkout v4.0.1
  5. bundle install
  6. yarn install
  7. RAILS_ENV=production bundle exec rails db:migrate
  8. RAILS_ENV=production bundle exec rails assets:precompile
  9. reboot now

And that should be it!

If you don’t want to restart your server, use the following commands instead of “reboot now”:

  1. exit
  2. systemctl restart mastodon-sidekiq
  3. systemctl reload mastodon-web

    The reload operation is a zero-downtime restart, also called a “phased restart”. As such, Mastodon upgrades usually do not require any advance notice to users about planned downtime. In rare cases, you can use the restart operation instead, but there will be a short, noticeable interruption of service for your users.

  4. The streaming API server is also updated and requires a restart; doing so will result in all connected clients being disconnected, which can increase the load on your server:
systemctl restart mastodon-streaming

Done!


A working Apache2 server with PHP7.4

I was in need of a server solution that could be quickly deployed as a VM.

      1. Install Debian 11 as a VM with web- and SSH server
      2. Create a USER next to your root account during the installation
      3. Find the IP address of the new installation. The easiest way is if you have noVNC running. Log in as USER and type
        ip a
      4. Time to do the sudo thing
        su

        log in as root

        apt-get update && apt-get install -y sudo
        usermod -aG sudo USER
        exit
        exit

        log back in as USER

      5. Okay, let’s install some more stuff, but first we do an update
        sudo apt-get update && sudo apt-get upgrade -y

        Now we want some essentials

        sudo apt-get install -y dirmngr gnupg2 nano wget gpg curl fail2ban ufw software-properties-common

        Preparing the PHP install

        wget -q https://packages.sury.org/php/apt.gpg -O- | sudo apt-key add -
        echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list
        sudo apt-get update
        sudo apt-get install -y php7.4 libapache2-mod-php7.4 php7.4-mysql php7.4-curl php7.4-gd php7.4-mbstring php7.4-xml php7.4-xmlrpc php7.4-zip

        And restart the Apache2 Webserver

        sudo systemctl restart apache2
      6. Alright, that’s done. Next step is to test things.
        sudo nano /var/www/html/test.php

        Enter this into the PHP file, then press Ctrl+X and type Y to save and exit.

        <?php
        // Show all PHP information
        phpinfo();
        ?>
      7. Go to the IP address of the server you just created and type
        http://<IP ADDRESS>/test.php
        

        If you see a PHP page with all sorts of data, you’re good. If not, go fix. Don’t ask me, I’m not there yet!
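You can also do a quick sanity check from the command line of the VM itself. A minimal sketch, assuming the default Apache document root of /var/www/html and the test.php file from step 6:

php -v                            # should report PHP 7.4.x
curl http://localhost/test.php    # should return the phpinfo() HTML
sudo systemctl status apache2     # should say "active (running)"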


Brutally brief: Create a new database and/or user in MySQL

Log in:

mysql -u root -p

Create a user:

create user 'newuser'@'localhost' IDENTIFIED BY 'password';

Give them all the power:

grant all privileges on *.* to 'newuser'@'localhost';

Reload privileges:

flush privileges;

Ditch a user: (Optional)

drop user 'newuser'@'localhost';

Log out:

\q

—————-

Log in:

mysql -u newuser -p

Create a database for the user:

create database db_name;

List databases: (Optional)

show databases;

Ditch database: (Optional)

drop database db_name;

Log out:

\q

Done.
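Bonus: if you’d rather not give them all the power, you can scope the grant to a single database instead. A minimal alternative, using the same db_name and newuser placeholders as above:

grant all privileges on db_name.* to 'newuser'@'localhost';
flush privileges;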


How to fix “Maintenance Mode” in NextCloud

Super annoying when you get locked out, innit?

It depends a bit on how you have installed NextCloud, but here are two possible locations for your config file:

/var/www/htdocs/nextcloud/config/config.php 
("regular" apache install)
/config/www/nextcloud/config/config.php
(when using a Docker install)

Open the config.php file and look for

 'maintenance' => true,

And change this to

 'maintenance' => false,
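Alternatively, if you still have command line access, Nextcloud’s occ tool can switch maintenance mode off for you. A minimal sketch, assuming the “regular” Apache install path from above and a web server running as www-data:

sudo -u www-data php /var/www/htdocs/nextcloud/occ maintenance:mode --off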

Done.


Archivebox Docker superuser root issues

Since the Unraid forum is throwing a hissy fit with its captcha thing, I’ll just post it on my own website.

Issue:

root@<containername>:/data# archivebox manage createsuperuser
[i] [2021-11-11 14:29:07] ArchiveBox v0.6.2: archivebox manage createsuperuser
> /data

[i] ArchiveBox should never be run as root!
For more information, see the security overview documentation:
https://github.com/ArchiveBox/ArchiveBox/wiki/Security-Overview#do-not-run-as-root

Oh noes. Open the unraid console and type:

$ sudo docker exec -it --user archivebox <containername> /bin/bash

If you do not know the name of your container, open the console panel of the ArchiveBox docker and look in the URL for “container=”. The value after that is the name of your container.

So your prompt looks like this:

archivebox@<containerid>:/data$

Then run:

archivebox manage createsuperuser

Sauce: https://forums.unraid.net/topic/95296-run-docker-as-another-user/?do=findComment&comment=993988


How to GIT

UPDATE

git pull

make new local branch:

git checkout -b [name]

CHANGES
make changes, then add all:

git add .

commit:

git commit -m "comment"

push the changes and create the branch on the remote (the first push needs to set the upstream):

git push -u origin [name]

TO GET THE LATEST VERSION

git checkout master
git pull
git checkout [USER]
git rebase master

Step 1. Fetch and check out the branch for this merge request

git fetch origin
git checkout -b [USER] origin/[USER]

Step 2. Review the changes locally

Step 3. Merge the branch and fix any conflicts that come up

git fetch origin
git checkout origin/master
git merge --no-ff [USER]

Step 4. Push the result of the merge to GitLab

git push origin master

(In this case [USER] is foxsan. And you are not him.)


IMAPSYNC for Debian 8 installation

  1. apt-get update
  2. apt-get upgrade
  3. apt-get install -y git libjson-webtoken-perl libauthen-ntlm-perl libcgi-pm-perl libcrypt-openssl-rsa-perl libdata-uniqid-perl libfile-copy-recursive-perl libio-socket-inet6-perl libio-socket-ssl-perl libio-tee-perl libhtml-parser-perl libjson-webtoken-perl libmail-imapclient-perl libparse-recdescent-perl libmodule-scandeps-perl libreadonly-perl libregexp-common-perl libsys-meminfo-perl libterm-readkey-perl libtest-mockobject-perl libtest-pod-perl libunicode-string-perl liburi-perl libwww-perl libtest-nowarnings-perl libtest-deep-perl libtest-warn-perl make cpanminus
  4. cd /home
  5. git clone https://github.com/imapsync/imapsync.git
  6. cd imapsync
  7. chmod +x imapsync
  8. Test it by typing
    ./imapsync
  9. You may need to install some extras by entering
    cpanm File::Tail
  10. cp imapsync /usr/bin/

Done!

Step 10 makes sure you can use the imapsync command from anywhere on the server. Have fun!
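And to actually move a mailbox, here is a minimal sketch of an imapsync run. The hostnames, users and passwords are placeholders, and --dry performs a test run without changing anything on the destination:

imapsync --host1 mail.old-server.example --user1 user@example.org --password1 'secret1' \
         --host2 mail.new-server.example --user2 user@example.org --password2 'secret2' \
         --ssl1 --ssl2 --dry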
