I had the opportunity to use AWS CodeDeploy with Jenkins to deploy and update applications running on Tomcat nodes. These applications are usually deployed under the '$CATALINA_BASE/webapps' directory of Tomcat.

There were two problems I had to consider when implementing a solution:

A single Tomcat node may receive one or more .war files generated from one or multiple source repositories.
Ansible is used when the target servers are first brought up with Terraform, so Ansible will already have deployed specific versions of the .war files. This makes CodeDeploy stop without removing "user"-installed files: it considers anything it did not deploy itself to be a user-driven event and halts with an error. This may seem strange at first, but it is the expected and sane behavior.

So my implementation had to cover both of these.

For selective removal and deployment, I collected the list of generated .war files in the Jenkins pipeline and passed it to shell scripts that CodeDeploy executes during its hook stages (refer to the AWS documentation for CodeDeploy hooks). The alternative is to remove all '*.war' files on the target host (be careful with this option!).

Tomcat is stopped and the required .war files are removed; when Tomcat is started again it removes the directories it had extracted from the old files. Then the new .war files are placed, and the deployment is done.
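As an illustration, a minimal hook script for the selective removal could look like the sketch below. The wars.list file, its location, and the Tomcat paths are assumptions for the example, not the exact scripts I used:

#!/bin/sh
# Sketch of a CodeDeploy hook script for selective .war removal.
# Assumes the Jenkins pipeline shipped wars.list, naming only the
# .war files this deployment owns (one per line).
CATALINA_BASE=/opt/tomcat
WARS_LIST=/opt/deploy/wars.list

service tomcat stop

# Remove only the .war files generated by this pipeline; Tomcat will
# clean up the matching exploded directories on its next start.
while read -r war; do
    rm -f "$CATALINA_BASE/webapps/$war"
done < "$WARS_LIST"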


I tried to install sysPass before this, but it had many issues on both Debian 10 and FreeBSD 12, so I moved to TeamPass.

The Debian 10 installation is pretty straightforward; you can just follow the official guide. Some package names have changed (PHP 7.3 instead of PHP 5/7.1, etc.), but it stays close to the official documentation. I was able to get it working (reached initial setup and maintenance mode) on both Debian 10 and FreeBSD 12.

https://teampass.readthedocs.io/en/latest/install/install-linux/

https://github.com/nilsteampassnet/teampass_doc/blob/master/docs/install/install-linux.md

Unless specified otherwise, run the commands below as root, or as a superuser with sudo or doas (on BSDs).

# pkg install apache24 mariadb104-server mariadb104-client php74 php74-mysqli php74-pecl-mcrypt php74-curl php74-gd php74-xml php74-bcmath

 

I had to install some more packages after I observed errors in /var/log/httpd-error.log; it contained stack traces, and I had to search around for the exact package names:

# pkg install mod_php74 php74-filter php74-session php74-openssl php74-mbstring php74-json php74-iconv php74-ldap

After installing the above packages you will have to restart the HTTP server, _if_ it was already running.

# service apache24 onerestart

Configure the PHP module. This is the difference between Debian and FreeBSD: on FreeBSD we have to take the extra step of configuring the module loading, otherwise .php files will not be interpreted.

Add the following lines to the file:
/usr/local/etc/apache24/modules.d/001_mod-php.conf

<IfModule dir_module>
    DirectoryIndex index.php index.html
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
</IfModule>

Again, if the server was already running, you will have to restart it for the changes to take effect. I got the above from this reference:

https://www.digitalocean.com/community/tutorials/how-to-install-an-apache-mysql-and-php-famp-stack-on-freebsd-12-0

Check whether the config is error-free before starting the web server:

# service apache24 configtest

To initialize the DB, you first need to start the server:

# service mysql-server onestart

Run the MySQL post-installation security script:

#  mysql_secure_installation

And now connect with the root account:

# mysql -uroot -p

mysql-shell> create database teampass character set utf8 collate utf8_bin;

mysql-shell> grant all privileges on teampass.* to teampass_admin@localhost identified by 'PASSWORD';

 

Note down the PASSWORD and the username; here it is teampass_admin.

You will need to add the IP and FQDN to the hosts file so that the IP resolves to the hostname of the host, and then edit the ServerName directive in /usr/local/etc/apache24/httpd.conf.

Like:

192.168.122.11 fbsd0
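The ServerName line in httpd.conf would then be something like (hostname taken from the example above):

ServerName fbsd0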

Start and enable the services so they start at host boot:

# sysrc apache24_enable="YES"
apache24_enable: -> YES

# sysrc mysql_enable="YES"
mysql_enable: -> YES

You can now connect to the IP/host from a browser (in my case http://fbsd0/teampass) and configure TeamPass!

 

Generic installation steps have been skipped to reduce verbosity and focus on the specific changes.

ELK installation:

-----------------

First install the ELK components using the official documentation:

https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html

NOTE: There are multiple ways to install them; use the Debian package method, as it makes later upgrades, package tracking/management, and scripting simple.

 

ELK configuration:

------------------

Elasticsearch changes

We changed the following settings in /etc/elasticsearch/jvm.options to make it suitable for production:

# Xms represents the initial size of total heap space

# Xmx represents the maximum size of total heap space

-Xms2g
-Xmx2g

#Force production checks

-Des.enforce.bootstrap.checks=true

Minimum and maximum heap size; setting them equal avoids heap resize pauses. The maximum should be less than 32GB!

https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html

We forced bootstrap checks to true on a single-node installation. Elastic considers a single-node installation a development environment, but single-node use is allowed, and forcing bootstrap checks (-Des.enforce.bootstrap.checks=true) is recommended.

References:

https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html

Note: Before you start Logstash, change the default index template that ships with Elasticsearch, otherwise it will create indices with five shards each, too many for an installation with fewer than three or four nodes.
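A minimal sketch of such a template change on the 6.x stack is below; the template name, pattern, and zero-replica setting are assumptions for a single-node setup (on 5.x the index_patterns key is called template instead):

curl -XPUT 'http://localhost:9200/_template/logstash' \
  -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'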

Kibana Changes

None, except enabling monitoring from the WEB-UI.

Logstash changes

Enable the persistent queue; this avoids adding one more layer of buffering and offers data durability.

/etc/logstash/logstash.yml

queue.type: persisted

#Reference

https://www.elastic.co/guide/en/logstash/6.3/persistent-queues.html#backpressure-persistent-queue

For logstash we added a new .conf file under /etc/logstash/conf.d/

/etc/logstash/conf.d/logstash-basic.conf

input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
#   index => "%{host}-%{+YYYY.MM.dd}"
#   template_name => "%{host}-template"
  }
}

Filebeat configuration:

-----------------------

Following are the contents of filebeat.yml:

---
filebeat.inputs:
- type: log
  paths:
    - /opt/apache-tomcat-*/logs/catalina.out
  multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
  multiline.negate: false
  multiline.match: after
  document_type: tomcat

- type: log
  paths:
    - /home/*/logs/some-other-log.log
  document_type: someothername

output.logstash:
  hosts:
    - "elk.some.ip.here:5044"

NOTE: The hosts IP address above changes based on the environment.

Filebeat reference for consolidating Multiline Java stack traces into one event:

https://www.elastic.co/guide/en/beats/filebeat/current/_examples_of_multiline_configuration.html

Filebeat reference for log inputs

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html

 

Setting up curator on Elasticsearch node:

------------------------------------------

The FLOSS version of the Elastic stack stores indices and metrics indefinitely; we need a utility called Curator to clean up older indices that are no longer required.

This requires pip3 (the Python 3 pip), which is common on newer GNU/Linux releases:

sudo apt install -y python3-pip

Curator is now part of the Elastic project; once you add the Elastic repo it can be installed using the OS package manager. Alternatively, it can be installed using pip.

Like:

$ pip3 install elasticsearch-curator --user

Once installed, create the curator config directory under the home directory of the normal user:

$ mkdir .curator

And a logs directory under it.

$ mkdir .curator/logs

Now create two .yml files with following content inside the above directory.

.curator/curator.yml

---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile: '/home/ubuntu/.curator/logs/curator.log'
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

 

Reference:

https://www.elastic.co/guide/en/elasticsearch/client/curator/current/configfile.html

 

Next, create a .yml file for deleting indices older than eight days:

.curator/delete-indices.yml

---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: Close indices older than X days based on index name.
    options:
      delete_aliases: False
      disable_action: False
    filters:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 8
  2:
    action: delete_indices
    description: >-
      Delete indices older than X days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 8

Reference:

https://www.elastic.co/guide/en/elasticsearch/client/curator/current/ex_delete_indices.html

Set up a cron job under /etc/cron.d to run Curator daily as a normal user.

Like:

/etc/cron.d/es-curator

MAILTO="alert-email-id@email.com"

@daily ubuntu . $HOME/.profile; curator ~/.curator/delete-indices.yml

The curator setup is now complete; modify the .yml files as necessary.

You can run curator_cli show_indices to list the indices present at the moment.
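Before handing the action file to cron, it is worth a test run; Curator's --dry-run flag logs what would be done without touching any indices:

$ curator --dry-run ~/.curator/delete-indices.yml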

 

Basic http authentication using Nginx:

-------------------------------------

sudo apt install apache2-utils nginx-light

Create a htpasswd file:

# htpasswd -c /etc/nginx/kibana-htpasswd zubairs

Ensure the file has the proper owner and group; if not, run:

# chown root:www-data /etc/nginx/kibana-htpasswd

Add more users as necessary.
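To add another user to the existing file, drop the -c flag (which would otherwise recreate the file); the username here is illustrative:

# htpasswd /etc/nginx/kibana-htpasswd seconduser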

Next, configure Nginx to act as a reverse proxy:

/etc/nginx/conf.d/kibana.conf

server {
    listen *:80;
    server_name elk.node.example.com;
    location / {
        proxy_pass http://localhost:5601;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/kibana-htpasswd;
    }
}

Finally restart Nginx after checking the configuration file:

# nginx -t

# systemctl restart nginx

 

This post details the setup necessary to get Jenkins working with Bitbucket webhooks, and the steps involved:

 

Install Jenkins

Install Jenkins LTS, which is what I have used. On FreeBSD it is in the pkg repositories; on Debian/Ubuntu you need to configure the repository once, following the instructions below:

https://pkg.jenkins.io/debian-stable/

 

Jenkins requirements to have it ready to work with Bitbucket webhooks:

  • Jenkins reachable from the public Bitbucket APIs/IP addresses.
  • A Jenkins user account with an API token enabled to accept incoming triggers.
  • The SSH key of the Jenkins node having read-only access to the Bitbucket repos.
  • The Jenkins instance having the appropriate roles attached to make use of AWS services/resources, if necessary; otherwise not required.

Configure Jenkins

  • Create a user with an API token to authenticate; save the token, it will be used later.

user-api-token-creation

 

  • Add/Modify the user privileges to read/build and workspace read permissions in the matrix:

 

user-privileges-matrix


Create and configure Jenkins Job/Pipeline

  • Create either a freestyle or a pipeline job and modify the Build Triggers as in the screen below:

job-build-triggers

 

Read the note below the text field; you get two types of URLs to use:

One without any parameters, like JENKINS_URL/job/my-job/build?token=TOKEN_NAME

Another with 'buildWithParameters', which allows the request to pass in parameters, like JENKINS_URL/job/my-job/buildWithParameters?token=TOKEN_NAME&paramone=FIRST&paramtwo=SECOND

 

This lets you trigger a Jenkins job/pipeline via a webhook and pass in the parameters required for the task, as in a parameterized job/build.
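To verify the trigger end to end before wiring up Bitbucket, you can fire the same request yourself; the user, token, and host below are placeholders, and depending on your CSRF/security settings a crumb may also be required:

curl -X POST "http://JENKINS_USER:API_TOKEN@jenkins.example.com:8080/job/my-job/buildWithParameters?token=TOKEN_NAME&paramone=FIRST"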

 

Make Jenkins reachable from public network:

With the Jenkins jobs created and configured, Jenkins must be reachable from the public network, or at least from the many endpoints/IPs of Bitbucket. In my case I am not using a reverse proxy, so my installation listens on TCP 8080.

NOTE: Take special care: use strong passwords on Jenkins user accounts, patch Jenkins to the latest stable release, and limit the privileges of the account created for automation. This is necessary as Jenkins will be exposed to the Internet; otherwise your setup is at risk, as at some point the installation may get compromised through weak passwords or vulnerabilities!

Configure Bitbucket webhooks:

 

With the Jenkins setup complete, we can now configure Bitbucket to send requests on certain events. Bitbucket configuration is easy compared to Jenkins, as the settings are pretty intuitive.

Go to the settings of the repository and click on 'Webhooks'; this will list the available webhooks. Add a new one and select the required triggers.

add-a-new-webhook

 

Now the Jenkins URL needs an addition. Recall the 35-character API token we generated while creating the automation user account in Jenkins; add this token to the URL before the FQDN/IP address, so:

JENKINS_URL/job/my-job/build?token=TOKEN_NAME

becomes:

http://JENKINS_USER_NAME:35_char_API_TOKEN@Jenkins_FQDN_IP:8080/job/my-job/build?token=TOKEN_NAME

Paste the URL of this sort, select the appropriate triggers and save the webhook settings.

You can now test whether the webhook triggers the job.

This post details how to use Remmina RDP over an SSH tunnel with pre- and post-scripts on Debian; with little or no change, the script and instructions should also work on FreeBSD and other GNU/Linux distributions.

Remmina has 'Pre' and 'Post' command support to execute a script before it connects and after it disconnects, respectively. We can use these features to work around certain issues, or for other maintenance. For instance, on a Debian 9 XFCE node, Remmina had issues with the built-in SSH tunnel options, whereas the tunnel worked fine when I created it manually from the shell and made Remmina connect through it.

On the advice of a Remmina contributor (antenore) over IRC, I created a very basic shell script that takes arguments to start and stop a tunnel.
I also made changes to my ~/.ssh/config and /etc/hosts files to support this setup: I placed a host entry in /etc/hosts and added the appropriate config so I can connect to the SSH tunnel node with a single command, like:

 

# Content of /etc/hosts  :

xx.xxx.xxx.xx my-tunnel-node

# Content of ~/.ssh/config: 

Host my-tunnel-node
Hostname xx.xxx.xxx.xx
User remoteuser
IdentityFile ~/remote-ssh-key.pem

Now I can connect with ssh my-tunnel-node, which also removes the overhead of remembering the connection details. With all this set, I then changed the Remmina settings.
The RDP connection settings should look like the picture below; note that the IP address 172.10.1.159 is the remote Windows/RDP node, and replace 'TUNNEL_IP_HERE' with the actual remote SSH tunnel node. It can be an IP address, FQDN, or connect string like user@nodeip.

remmina-rdp-edit

Below is the content of the script(rdp-tunnel.sh) file, place it under your home directory and update the Remmina settings accordingly.

#!/bin/sh
# Start/stop an SSH tunnel for RDP using an SSH control socket.

scriptname="$(basename "$0")"

if [ $# -lt 3 ]
then
    echo "Usage: $scriptname start|stop RDP_NODE_IP SSH_NODE_IP"
    exit 1
fi

case "$1" in

start)
  echo "Starting tunnel to $3"
  # -M: master mode with a control socket; -fnNT: background, no command, no TTY
  ssh -M -S ~/.ssh/"$scriptname".control -fnNT -L 3389:"$2":3389 "$3"
  ssh -S ~/.ssh/"$scriptname".control -O check "$3"
  ;;

stop)
  echo "Stopping tunnel to $3"
  ssh -S ~/.ssh/"$scriptname".control -O exit "$3"
  ;;

*)
  echo "Did not understand your argument, please use start|stop"
  ;;

esac
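With the script in place, the Remmina 'Pre Command' and 'Post Command' fields would then be along these lines (hosts taken from the example above):

# Pre-command:
~/rdp-tunnel.sh start 172.10.1.159 my-tunnel-node

# Post-command:
~/rdp-tunnel.sh stop 172.10.1.159 my-tunnel-node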

References:

SSH using Control Master:
https://stackoverflow.com/questions/2241063/bash-script-to-setup-a-temporary-ssh-tunnel/15198031#15198031

 

How to get IPSEC/L2TP VPN working on Ubuntu with network manager GUI:

This is already documented; you can follow this post:
http://blog.z-proj.com/enabling-l2tp-over-ipsec-on-ubuntu-16-04/

Just a note on the above post: I did not install the custom xl2tpd version it mentions. On my Lubuntu 16.04 box I went with the stock xl2tpd provided in the repos and it worked fine. In fact I did not compile anything, apart from using the PPA and installing whatever it pulled in.


In this post I will detail how I used Debian 9 to connect to a corporate VPN based on IPSEC/L2TP from the CLI.
Other VPNs, which can be connected to using OpenVPN or Cisco OpenConnect, are fairly straightforward to work with, and I never had any trouble with them before. But some organizations that we work with use this type of VPN. I wanted to achieve this without any GUI, using only the CLI, as I have stopped using NetworkManager.

Further, I wanted to make this work on both FreeBSD and Debian, as these are my OSs of choice. NetworkManager does not support FreeBSD yet.
Note that FreeBSD 11 onward has kernel support built in for this VPN stack/protocol; on older releases you will need a custom kernel with patches applied to get this working. I will focus on Debian 9 in this post, and perhaps the next post will be on FreeBSD 11, if I get it working.

I tried hard to make it work using CLI tools alone, but it did not work, causing much frustration. So I used a Lubuntu 16.04 VM to connect using the GUI, took the content of the config files that worked, and mirrored that setup on the other VMs, along with help from the posts shared in the references below.

 

How to get an IPSEC/L2TP VPN working on Debian 9:

The IT guy provided me with:

A username and password (my LDAP account details).
The URL of the VPN to connect to.
A secret/PSK (pre-shared key).

What I needed in addition to the above were the hash and the encryption scheme used, etc., which we will collect below; other than these I used the default values provided by the respective software.

As root install:

root shell> apt-get install -y strongswan xl2tpd ppp ike-scan

ike-scan is for determining the remote VPN server settings related to authentication.

Run it against the target server you need to connect to:

root shell> ike-scan YOUR_VPN_URL_OR_IP_HERE.COM

Starting ike-scan 1.9.4 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/)
3x.xxx.xxx.xxx Main Mode Handshake returned HDR=(CKY-R=e7f46fcf375e22e3) SA=(Enc=3DES Hash=SHA1 Auth=PSK Group=2:modp1024 LifeType=Seconds LifeDuration(4)=0x00007080)

Ending ike-scan 1.9.4: 1 hosts scanned in 0.635 seconds (1.58 hosts/sec). 1 returned handshake; 0 returned notify

 

You will need the above details to configure strongswan/openswan/libreswan:

 

Edit /etc/ipsec.conf and add the following; I am pasting the snippet from my configuration:

config setup
  # strictcrlpolicy=yes
  # uniqueids = no

# Add connections here.

conn myvpn
  auto=add
  type=transport
  authby=psk
  keyingtries=0
  left=%defaultroute
  leftprotoport=udp/l2tp
  right=3x.xxx.xxx.xxx
  rightid=%any
  rightprotoport=udp/l2tp
  keyexchange=ikev1
  ike=3des-sha1-modp1024
  esp=3des-sha1

 

Values for ike and esp vary according to the setup; use ike-scan to determine them and/or consult the IT person. If all else fails, connect from the GUI and check the values after a successful connection.

Next, add the pre-shared key (PSK/secret) to /etc/ipsec.secrets.
Important! Ensure you echo the line instead of adding it manually; I spent a few days debugging after I had edited the file by hand!

root shell> echo ': PSK "YOUR_PSK_OR_SECRET_HERE"' >> /etc/ipsec.secrets

 

You can now test whether this works by restarting the strongswan service:

root shell> systemctl restart strongswan.service

In another terminal, check the logs using:

root shell> journalctl -u strongswan.service

Jan 13 15:06:14 debian charon[6503]: 00[LIB] dropped capabilities, running as uid 0, gid 0
Jan 13 15:06:14 debian charon[6503]: 00[JOB] spawning 16 worker threads
Jan 13 15:06:14 debian ipsec[6489]: charon (6503) started after 20 ms
Jan 13 15:06:14 debian ipsec_starter[6489]: charon (6503) started after 20 ms
Jan 13 15:06:14 debian charon[6503]: 05[CFG] received stroke: add connection 'myvpn'
Jan 13 15:06:14 debian charon[6503]: 05[CFG] added configuration 'myvpn'

Now run:

root shell> ipsec status

Security Associations (0 up, 0 connecting):
 none

root shell> ipsec up myvpn
 .
 .
 .
 sending packet: from 10.0.2.15[4500] to 3x.xxx.xxx.xxx[4500] (220 bytes)
 received packet: from 3x.xxx.xxx.xxx[4500] to 10.0.2.15[4500] (172 bytes)
 parsed QUICK_MODE response 150100366 [ HASH SA No ID ID NAT-OA NAT-OA ]
 connection 'myvpn' established successfully

In the other terminal, where journalctl -u strongswan.service is running, you should see something like:

Jan 13 15:08:26 debian charon[6503]: 06[NET] received packet: from 3x.xxx.xxx.xxx[4500] to 10.0.2.15[4500] (92 bytes)
Jan 13 15:08:26 debian charon[6503]: 06[ENC] parsed ID_PROT response 0 [ ID HASH V ]
Jan 13 15:08:26 debian charon[6503]: 06[IKE] received DPD vendor ID
Jan 13 15:08:26 debian charon[6503]: 06[IKE] IKE_SA myvpn[1] established between 10.0.2.15[10.0.2.15]...3x.xxx.xxx.xxx[3x.xxx.xxx.xxx]
Jan 13 15:08:26 debian charon[6503]: 06[IKE] IKE_SA myvpn[1] established between 10.0.2.15[10.0.2.15]...3x.xxx.xxx.xxx[3x.xxx.xxx.xxx]

 

Check the status with ipsec status/statusall:

root shell> ipsec status
Security Associations (1 up, 0 connecting):
myvpn[1]: ESTABLISHED 3 minutes ago, 10.0.2.15[10.0.2.15]...3x.xxx.xxx.xxx[3x.xxx.xxx.xxx]
myvpn{1}: INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: c641102f_i 03118698_o
myvpn{1}: 10.0.2.15/32[udp/l2f] === 3x.xxx.xxx.xxx/32[udp/l2f]

 

Now, stop the service and move on to configuring the other components.

Configure xl2tpd:

Edit file /etc/xl2tpd/xl2tpd.conf:

[global]
access control = yes
port = 1701

[lac l2tp]
lns = 3x.xxx.xxx.xxx
pppoptfile = /etc/ppp/ppp-options.opts
autodial = yes
tunnel rws = 8

Now edit the file /etc/ppp/ppp-options.opts; you can change the location to something else.

nodetach
usepeerdns
noipdefault
nodefaultroute
noauth
noccp
refuse-eap
refuse-chap
refuse-mschap
refuse-mschap-v2
lcp-echo-failure 0
lcp-echo-interval 0
mru 1400
mtu 1400
user YOUR_LDAP_USERNAME_OR_ACCOUNTNAME_GIVEN_BY_IT
password YOUR_ACCOUNT_OR_LDAP_PASSWORD_PROVIDED

Once done, start strongswan first, then run the ipsec up command as above, then start the xl2tpd service; in one line:

systemctl start strongswan.service ; sleep 3; ipsec up myvpn; systemctl start xl2tpd.service

Check whether the connection got established using ipsec statusall.

To stop, run:

systemctl stop xl2tpd.service ; ipsec down myvpn; systemctl stop strongswan.service;

The VPN is set up, but we need to add routing table entries in order for traffic to flow in and out of the VPN:

 

As root user:

route add 3x.xxx.xxx.xxx gw 10.0.2.2
route add default dev ppp0

So in general:

route add VPN-PUBLIC-IP gw LOCAL-GATEWAY-IP
route add default dev pppX

Here 10.0.2.2 is the gateway of the VirtualBox NAT network my VM is on; change this accordingly in your case.
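If you prefer iproute2 over the legacy route command, the equivalent commands should be along these lines:

ip route add VPN-PUBLIC-IP via LOCAL-GATEWAY-IP
ip route add default dev ppp0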

Check using a fetch/curl/wget command and you should see the public IP address of the remote network, like:

wget -qO- https://canihazip.com/s

or,

curl https://canihazip.com/s

To change back to non-VPN setup:

1. Change routing table to what it was before,
2. Stop xl2tpd and strongswan services.

To delete the added routes:

route del default dev ppp0
route del 3x.xxx.xxx.xxx gw 10.0.2.2

 

To understand what happens, check the routing tables and current network setup on your local machine before you configure anything. This is just for understanding, or for troubleshooting the setup; it is not necessary for the actual setup.

Pre-connection routing table:

$ netstat -nr4

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG        0 0          0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U         0 0          0 enp0s3
192.168.56.0    0.0.0.0         255.255.255.0   U         0 0          0 enp0s8

 

$ ip route

default via 10.0.2.2 dev enp0s3 proto static metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.8 metric 100

Network address/link/device configuration:

$ ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:85:30:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 78860sec preferred_lft 78860sec
    inet6 fe80::a00:27ff:fe85:308b/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:67:7e:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.8/24 brd 192.168.56.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe67:7ead/64 scope link
       valid_lft forever preferred_lft forever

 

Compare the above output to the routing/networking information after connection.

Post-connection network configuration and routing table:

$ ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:85:30:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 78910sec preferred_lft 78910sec
    inet6 fe80::a00:27ff:fe85:308b/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:67:7e:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.8/24 brd 192.168.56.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe67:7ead/64 scope link
       valid_lft forever preferred_lft forever
4: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UNKNOWN group default qlen 3
    link/ppp
    inet 10.12.14.147 peer 192.0.2.1/32 scope global ppp0
       valid_lft forever preferred_lft forever



$ netstat -nr4

Kernel IP routing table
Destination     Gateway         Genmask          Flags   MSS Window  irtt Iface
0.0.0.0         0.0.0.0         0.0.0.0          U         0 0          0 ppp0
0.0.0.0         10.0.2.2        0.0.0.0          UG        0 0          0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0    U         0 0          0 enp0s3
3x.xxx.xxx.xxx  10.0.2.2        255.255.255.255  UGH       0 0          0 enp0s3
192.0.2.1       0.0.0.0         255.255.255.255  UH        0 0          0 ppp0
192.168.56.0    0.0.0.0         255.255.255.0    U         0 0          0 enp0s8

$ ip route

default dev ppp0 proto static scope link metric 50
default via 10.0.2.2 dev enp0s3 proto static metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
3x.xxx.xxx.xxx via 10.0.2.2 dev enp0s3 proto static metric 100
192.0.2.1 dev ppp0 proto kernel scope link src 10.12.14.147 metric 50
192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.8 metric 100

 

Edit (on 13 August 2018):

I created a crude shell script to make connecting and disconnecting simple. It has some issues: the connection sometimes is not established due to missing records, and I have to run it again to get it to connect.

#!/bin/sh

start_oh() {
        echo "Starting VPN services.."
        systemctl start strongswan.service && sleep 1 && ipsec up oh && systemctl start xl2tpd.service;
        echo "Adding required routing records.."
        sleep 1
        route add 38.88.227.130 gw 10.0.2.2
        sleep 1
        route add default dev ppp0
        echo "OH VPN started.."
        return 0
}


stop_oh() {

        echo "removing the VPN routing records.."
        route del default dev ppp0
        sleep 1
        route del 38.88.227.130 gw 10.0.2.2
        systemctl stop xl2tpd.service && ipsec down oh && sleep 1 && systemctl stop strongswan.service
        echo "OH VPN stopped.."
        return 0
}

case "$1" in
  start)   start_oh ;;
  stop)    stop_oh ;;
  restart) stop_oh && start_oh ;;
  *)       echo "Invalid command" ;;
esac

References:

https://wiki.archlinux.org/index.php/Openswan_L2TP/IPsec_VPN_client_setup

http://www.jasonernst.com/2016/06/21/l2tp-ipsec-vpn-on-ubuntu-16-04/

https://libreswan.org/wiki/VPN_server_for_remote_clients_using_IKEv1_with_L2TP

 

 

I don't use a smartphone (yet), and the recent engagement I was selected for required setting up MFA/2FA by scanning a QR code for Bitbucket, DigitalOcean, GitHub and AWS accounts.

There were other troubles too: GitHub did not list India as a region where I could set up 2FA using SMS! AWS did not list any SMS option at all!
I initially used the Python library pyotp to decode the secret from the base32-encoded string I got after scanning the QR code with an online/offline tool, but that was not sufficient, as the accounts require the user to supply the OTP on new logins.
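For reference, once you have the base32 secret, generating the current OTP with pyotp is a one-liner; the secret below is a placeholder:

python3 -c 'import pyotp; print(pyotp.TOTP("BASE32SECRETHERE").now())'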

Enter Authenticator:

https://github.com/Authenticator-Extension/Authenticator

A Chromium and Firefox browser plugin that allows a user to set up 2FA without a smartphone. It even allows scanning the QR code directly from the page displayed on the site, and there is also a way to copy-paste the secret and register the account. Another advantage is that it works on both of my OSs (Debian and FreeBSD), as it runs in the browser. 🙂

Now I just need to find an equivalent addon for Firefox and SeaMonkey.

Update 06 May 2019 – This addon is now available for Firefox!
https://addons.mozilla.org/en-US/firefox/addon/auth-helper/

For more reasons why one should avoid smartphones with closed-source software, please check out:

https://www.gnu.org/proprietary/proprietary-surveillance.en.html#SpywareInAndroid
https://www.fsf.org/blogs/community/the-apple-is-still-rotten-why-you-should-avoid-the-new-iphone

 

An excellent laptop, with a wireless chip compatible with stock Debian and FreeBSD installations! This is some of the first hardware I have come across where the OS detected the wireless chip during installation.

Next, I used a UEFI-based dual-boot installation and had to manually add the Debian entry in the BIOS setup. The FreeBSD EFI partition got detected out of the box, sweet!

The hardware list from lspci on Debian:

00:00.0 Host bridge: Intel Corporation Broadwell-U Host Bridge -OPI (rev 09)
00:02.0 VGA compatible controller: Intel Corporation Broadwell-U Integrated Graphics (rev 09)
00:03.0 Audio device: Intel Corporation Broadwell-U Audio Controller (rev 09)
00:04.0 Signal processing controller: Intel Corporation Broadwell-U Camarillo Device (rev 09)
00:14.0 USB controller: Intel Corporation Wildcat Point-LP USB xHCI Controller (rev 03)
00:16.0 Communication controller: Intel Corporation Wildcat Point-LP MEI Controller #1 (rev 03)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection (3) I218-LM (rev 03)
00:1b.0 Audio device: Intel Corporation Wildcat Point-LP High Definition Audio Controller (rev 03)
00:1c.0 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express Root Port #1 (rev e3)
00:1c.3 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express Root Port #4 (rev e3)
00:1c.4 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express Root Port #5 (rev e3)
00:1d.0 USB controller: Intel Corporation Wildcat Point-LP USB EHCI Controller (rev 03)
00:1f.0 ISA bridge: Intel Corporation Wildcat Point-LP LPC Controller (rev 03)
00:1f.2 SATA controller: Intel Corporation Wildcat Point-LP SATA Controller [AHCI Mode] (rev 03)
00:1f.3 SMBus: Intel Corporation Wildcat Point-LP SMBus Controller (rev 03)
01:00.0 SD Host controller: O2 Micro, Inc. SD/MMC Card Reader Controller (rev 01)
02:00.0 Network controller: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter (rev 01)

 

On Debian everything works fine, but you might want to remove the Intel Xorg driver (xserver-xorg-video-intel), as it targets hardware older than 2007; with the old driver installed, graphics were not smooth and CPU utilization increased.
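Removing it on Debian is a one-liner; Xorg should then fall back to the generic modesetting driver:

sudo apt purge xserver-xorg-video-intel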

Other than this, I was unable to suspend to RAM when HT was disabled; enabling HT in the BIOS solved this.

On FreeBSD, the integrated GPU is not yet supported :(, so it is just the command line for now.

Will consider Dell again for my computing.

While many speak of web servers like Apache or Nginx, I wanted to try out lighttpd. I dislike the way Nginx Inc releases its product, which is open core; I prefer something completely libre.

The aim is to deploy ZeroBin with whatever PHP version is available on FreeBSD 11. The installation of ZeroBin itself is simple; we just have to extract the package into the document root of the web server.

Install the required packages:

# pkg install php70 lighttpd

You might want to install the php70-gd package in case you are using the gd module.

Once installed, configure lighttpd; there are a few quirks to make it work.

In file /usr/local/etc/lighttpd/lighttpd.conf

Disable IPv6.

server.use-ipv6 = "disable"

If you don’t disable IPv6 when your node is not using it, you will get error messages like “protocol not supported”.

Next, bind the web server to the server IP address, and change the server root value if you want to change the default.

server.bind = "192.168.1.18"

We will use the fastcgi module of lighttpd; enable it by un-commenting the entry in /usr/local/etc/lighttpd/modules.conf:

include "conf.d/fastcgi.conf"

Next, point the lighttpd FastCGI module to the php-cgi binary. Edit the file /usr/local/etc/lighttpd/conf.d/fastcgi.conf, uncomment the block starting at "fastcgi.server =", and change the value of the "bin-path" directive, as we will be making changes related to this value below.

fastcgi.server = ( ".php" =>
  ( "php-local" =>
    (
      "socket" => socket_dir + "/php-fastcgi-1.socket",
      "bin-path" => server_root + "/bin/php-cgi",
      "max-procs" => 1,
      "broken-scriptfilename" => "enable",
    )
  ),
  ( "php-tcp" =>
    (
      "host" => "127.0.0.1",
      "port" => 9999,
      "check-local" => "disable",
      "broken-scriptfilename" => "enable",
    )
  ),
  ( "php-num-procs" =>
    (
      "socket" => socket_dir + "/php-fastcgi-2.socket",
      "bin-path" => server_root + "/bin/php-cgi",
      "bin-environment" => (
        "PHP_FCGI_CHILDREN" => "16",
        "PHP_FCGI_MAX_REQUESTS" => "10000",
      ),
      "max-procs" => 5,
      "broken-scriptfilename" => "enable",
    )
  ),
)

If you have not changed the value of "bin-path" as above, or according to the value of "var.server_root" (in /usr/local/etc/lighttpd/lighttpd.conf), you will see the following errors during lighttpd startup in /var/log/lighttpd/error.log:

2016-10-20 19:35:13: (log.c.216) server started
2016-10-20 19:35:13: (mod_fastcgi.c.1133) the fastcgi-backend /usr/local/www/data/usr/local/bin/php-cgi failed to start:
2016-10-20 19:35:13: (mod_fastcgi.c.1137) child exited with status 2 /usr/local/www/data/usr/local/bin/php-cgi
2016-10-20 19:35:13: (mod_fastcgi.c.1140) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version.\nIf this is PHP on Gentoo, add 'fastcgi' to the USE flags.

You can see that the configuration builds the path by appending the bin-path value to the server_root value, which in this case produced the wrong path.

For my configuration to work I had to set var.server_root = "/usr/local".

Once the above config changes are done, untar the ZeroBin package into the document root, which by default is '/usr/local/www/data', and change the owner and group to 'www':
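For example, assuming the downloaded tarball sits in the current directory (the filename is illustrative):

# tar -xzf zerobin-master.tar.gz -C /usr/local/www/data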

chown -R www:www /usr/local/www/data

References:

https://box.matto.nl/freebsd10lighttpd.html

 

Install Redmine, Apache, MySQL, and the Passenger module (rubygem-passenger):

# pkg install redmine apache24 mysql56-server mysql56-client rubygem-passenger

Locations to note, where we will place and edit files:

Installation directory of Redmine:

/usr/local/www/redmine

Redmine Config directory:

/usr/local/www/redmine/config

Apache virtualhost directory:

/usr/local/etc/apache24/Includes

Next, start MySQL:

# service mysql-server onestart

Create the necessary DB and user for Redmine, and grant privileges:

CREATE DATABASE redmine CHARACTER SET utf8;
CREATE USER 'redmine'@'localhost' IDENTIFIED BY 'my_password';
GRANT ALL PRIVILEGES ON redmine.* TO 'redmine'@'localhost';

In the above commands change the password, database name, and user name for your setup.

DB Data load:

Load the DB dump taken from the old Redmine instance into the new one, as the root user:

# mysql -u REDMINE_USER -p < DB_DUMP_FILENAME_here.sql

You might need to add the line "USE REDMINE_DB_NAME;" (for the above, "USE redmine;") to the top of the .sql dump file, as the script might not have a statement selecting which DB to populate.
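One way to prepend that line without opening an editor (using the placeholder filename from above):

# ( echo "USE redmine;"; cat DB_DUMP_FILENAME_here.sql ) > dump_with_use.sql
# mysql -u REDMINE_USER -p < dump_with_use.sql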

Redmine configuration:

Copy the old database.yml file into the config directory of Redmine and change the adapter type from 'mysql' to 'mysql2'.
Copy the old configuration.yml file into the config directory of Redmine.
Copy the attachments directory (named files) from the old installation into the new installation directory.

After the above, follow the guide below to upgrade the DB schema, generate a new session token, etc.:
https://www.redmine.org/projects/redmine/wiki/RedmineUpgrade
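For reference, the key commands from that guide are typically the following, run from /usr/local/www/redmine; check the guide for the exact steps for your version:

# bundle exec rake generate_secret_token
# bundle exec rake db:migrate RAILS_ENV=production
# bundle exec rake redmine:plugins:migrate RAILS_ENV=production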

Apache virtual hosts configuration:

I followed the message posted when the Passenger module was installed.

Put the following in any file with the .conf extension, like redmine.conf, under the Apache Includes directory:

#Redirect all http requests to https

<VirtualHost *:80>
        #Replace the address below with the FQDN or the IP address of your server/service
        Redirect / https://52.70.124.168:443/
</VirtualHost>

#Enable server to listen on TCP port 443
Listen 443

<VirtualHost *:443>

        #Load SSL module and enable SSL using certificates
        LoadModule ssl_module libexec/apache24/mod_ssl.so
        SSLEngine on
        SSLCertificateFile "/usr/local/etc/apache24/FQDN_NAME.crt"
        SSLCertificateKeyFile "/usr/local/etc/apache24/FQDN_NAME.key"

        #Load Passenger module and point to Ruby and Gems
        LoadModule passenger_module /usr/local/lib/ruby/gems/2.2/gems/passenger-5.0.28/buildout/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/2.2/gems/passenger-5.0.28
        PassengerRuby /usr/local/bin/ruby22

    # This is the passenger config
    RailsEnv production
    PassengerDefaultUser www
    DocumentRoot /usr/local/www/redmine/public/
    <Directory "/usr/local/www/redmine/public/">
        Allow from all
        Options -MultiViews
        Require all granted
    </Directory>
</VirtualHost>

Finally, run the mysql_secure_installation script to disable remote root logins.
Start the Apache process, and add both the Apache and MySQL services to /etc/rc.conf so they start at boot time:

service apache24 onestart

sysrc mysql_enable="YES"
sysrc apache24_enable="YES"

This ensures that Redmine starts up during boot, once Apache and MySQL are running.

I faced an issue where email notifications were not working; for this, check the configuration.yml file against the Redmine wiki. In my case the file from the previous installation had incorrect settings.

https://www.redmine.org/projects/redmine/wiki/EmailConfiguration