A Virtual Private Network (VPN) protects your device from other devices on the same local network. It does so by encrypting all traffic that leaves your machine; afterwards, only a chosen, trusted third-party machine can decrypt the traffic again. This machine is usually hosted by a service provider or, e.g., the IT department of your company. In this post, I share some thoughts about when to use a virtual private network and when you are probably better off not using one.
At work
The IT department of your employer might ask you to always use their own VPN solution when connecting with your work devices. They do so to ensure that devices in your home network cannot affect the company, e.g., when you are working from home or traveling.
At home
You might wonder whether encrypted network traffic is something you also need in your private life.
If you are at home, I would advise against using a VPN solution; it will probably only bring you disadvantages. For example, VPNs usually mean a slower Internet connection, lower bandwidth, or even broken services like video streaming. Additionally, there is not much benefit for you. I assume here that you actually can and do trust your own local network at home. If that is not the case — for example, your house is full of gadgets from shady marketplaces — then a VPN is indeed a good thing to use. However, you should not use such gadgets in the first place.
The situation is different when you are on the go. In public places like train stations, airports, or your favorite coffee shop, you are essentially connecting to a local network over which you have little control. Be aware that almost anyone could connect to these networks, provided they are physically present. Thus, they might be able to tamper with your network traffic: read it, modify it, or reroute it. In these cases, a VPN can save you a lot of worrying, as it prevents locally connected people from doing so. They will only see encrypted bytes on the wire (or in the wifi), which are useless to them.
The recommendations differ based on who places what kind of trust in which local network. With the company VPN, it is your employer who does not trust your local network at home. You, on the go, should not trust random networks you connect to. But at your own home, you are most likely better off not using a VPN.
Other reasons
Finally, there are further reasons which might call for a VPN. For example, it is a way to circumvent geoblocking on video streaming sites when you are abroad. Maybe you have encountered this already: you are on vacation somewhere in the world and want to enjoy the next episode of your favorite show. However, your streaming service tells you it is only available in your home country. Here, a VPN can virtually transfer you back home — or wherever else in the world you want to be — enabling you to enjoy the next episode.
If you are interested in getting your very own private VPN solution, contact us and we will be happy to help you out.
One of the first steps when attackers try to tinker with your environment is often referred to as "enumeration". As I run multiple web services on my own servers, I am of course curious what all this open-source software actually does. So let's start with enumeration of your web servers. Naturally, I could deep-dive into their source code and evaluate their functionality. However, this post will show you how to gather information about your web servers through enumeration, without looking at the code.
Port Scanning
Let's start with a remote test. Currently I am at work, so I can only scan my home network from the outside. Let's do this.
Remote enumeration of my home network.
Looking at my home network from the outside, we see that several ports are open. Let's redo this detection and see whether nmap can gather some more information on the services it listed. The -A flag tells nmap to try to detect the operating system and the versions of the software used, to scan for (nmap-)known vulnerabilities, and to perform a traceroute to estimate the server's location. This time I also added the flag -T4, which tells nmap to use a faster timing template. This speeds up the analysis but may be less successful in detecting everything.
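The invocation producing the output below would look roughly like this (the hostname is a placeholder):

```
nmap -A -T4 yourhostname.or.ip
```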
Host is up (0.011s latency).
Not shown: 994 closed ports
PORT STATE SERVICE VERSION
23/tcp open ssh OpenSSH 8.3 (protocol 2.0)
53/tcp open domain?
443/tcp open ssl/http nginx 1.18.0
| http-robots.txt: 1 disallowed entry
|_/
|_http-server-header: nginx/1.18.0
|_http-title: Jitsi Meet
| ssl-cert: Subject: commonName=meet.yourhostname.or.ip
| Subject Alternative Name: DNS:meet.yourhostname.or.ip
| Not valid before: 2020-08-18T12:17:00
|_Not valid after: 2020-11-16T12:17:00
|_ssl-date: TLS randomness does not represent time
| tls-alpn:
| h2
|_ http/1.1
| tls-nextprotoneg:
| h2
|_ http/1.1
2000/tcp open tcpwrapped
5060/tcp open tcpwrapped
8008/tcp open http
| fingerprint-strings:
| FourOhFourRequest:
| HTTP/1.1 302 Found
| Location: https://:8015/nice%20ports%2C/Tri%6Eity.txt%2ebak
| Connection: close
| X-Frame-Options: SAMEORIGIN
| X-XSS-Protection: 1; mode=block
| X-Content-Type-Options: nosniff
| Content-Security-Policy: frame-ancestors
| GenericLines, HTTPOptions, RTSPRequest, SIPOptions:
| HTTP/1.1 302 Found
| Location: https://:8015
| Connection: close
| X-Frame-Options: SAMEORIGIN
| X-XSS-Protection: 1; mode=block
| X-Content-Type-Options: nosniff
| Content-Security-Policy: frame-ancestors
| GetRequest:
| HTTP/1.1 302 Found
| Location: https://:8015/
| Connection: close
| X-Frame-Options: SAMEORIGIN
| X-XSS-Protection: 1; mode=block
| X-Content-Type-Options: nosniff
|_ Content-Security-Policy: frame-ancestors
|_http-title: Did not follow redirect to https://yourhostname.or.ip:8015/
|_https-redirect: ERROR: Script execution failed (use -d to debug)
This time some things changed. While in the simple port scan at the beginning nmap guessed port 23 would be hosting a telnet service, it now shows us there is an OpenSSH server, version 8.3, although you would normally expect it on port 22. Second, port 53 seems to host a DNS service. nmap also detected that port 443 is actually hosting a web server running TLS, and that Jitsi Meet is running on it. For ports 2000 and 5060, we still do not know what they are. Finally, port 8008 has another web server on it, but we have no further information about it.
Banner Grabbing
In fact, all of this is correct. But how did nmap learn about the changed port of SSH? The answer is banner grabbing. That is the same technique shodan.io uses. In short, you open a TCP socket to the port and start receiving bytes from it. I have quickly scripted this in Python to demonstrate.
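The script is roughly the following. It is a minimal sketch; the 1024-byte read and the 3-second timeout are arbitrary choices, and the hostname in the usage comment is a placeholder:

```python
#!/usr/bin/env python3
"""Minimal banner grabbing: connect to a TCP port and read whatever
the service volunteers first (e.g. "SSH-2.0-OpenSSH_8.3")."""
import socket

def grab_banner(host, port, timeout=3.0):
    """Return the first chunk of bytes the server sends, decoded."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            data = sock.recv(1024)  # many services send their banner unprompted
        except socket.timeout:
            data = b""              # silent services (e.g. plain HTTP) send nothing
    return data.decode(errors="replace").strip()

# Example usage (hostname is a placeholder):
#   print(grab_banner("yourhostname.or.ip", 23))
```

Services that speak first, like SSH or SMTP, reveal themselves this way; protocols where the client speaks first simply return an empty string here.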
If I run this against the relocated SSH service, we get the same information nmap got.
Banner grabbing on one service.
So, the remaining question is what changes if we have local access to a server and want to check for available services. Long story short: not much. I also scanned my machine locally with nmap localhost -A and found some more services which are only published locally or in the internal network. I hope this enables you to get started with nmap and to find out what is actually on your network. By the way, you can also scan a whole network using CIDR notation:
nmap 192.168.178.1/24
Before you start with happy network scanning, please keep in mind that you should only do this on your own machines and networks.
In this short tutorial, we will see how to use radare2 to reverse engineer the EasyPass challenge from HTB. Let’s start by looking what the program does.
Apparently, the program just wants a password. My attempt with "test" just produced a message with the text "Wrong password".
So, let’s load up the binary in r2.
r2 EasyPass.exe
Now, we are ready to start our analysis using reverse engineering. First, let radare2 do the difficult job of analyzing the program structure and discovering functions.
peter@terra> r2 EasyPass.exe
-- Your problems are solved in an abandoned branch somewhere
[0x00454450]> aav
[x] Finding xrefs in noncode section with anal.in=io.maps
[x] Analyze value pointers (aav)
[x] Value from 0x00454600 to 0x00455000 (aav)
[x] 0x00454600-0x00455000 in 0x454600-0x455000 (aav)
[x] 0x00454600-0x00455000 in 0x401000-0x454600 (aav)
[x] Value from 0x00401000 to 0x00454600 (aav)
[x] 0x00401000-0x00454600 in 0x454600-0x455000 (aav)
[x] 0x00401000-0x00454600 in 0x401000-0x454600 (aav)
[0x00454450]> s entry0
Finding the right place
Well, we are now at the entry point of the program. However, at startup this program only constructs the window and draws it to the display. That is not what we are interested in; we are looking for the password check. So, let's try to find this section by searching the binary for strings containing "Password".
As "Wrong Password" was the text in the message we got with our wrong trial, this might be a good starting point.
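One way to find it, assuming the initial analysis indexed the binary's strings, is radare2's string listing combined with its internal grep:

```
[0x00454450]> iz~Password      # list strings in data sections, grep for "Password"
```

The address reported there is the one we seek to next.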
[0x00454450]> s 0x454200
Let's switch into the visual mode with V and have a look at what we find at this position. You can use p or P to cycle between different views.
If we scroll up a bit, we can see another string in the hexdump. "Good job." sounds exactly like what we are looking for.
Hence, we search for "Good" and note its location. We have already seen that we are actually in a data section of the binary which contains only strings, no code. Hence, we must find the code location which references this string.
With axt (A cross(X)-reference To) we find this location. Let’s again jump into the visual mode and have a look at the disassembly.
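A sketch of that lookup in the r2 shell; the address 0x454144 is purely illustrative, not the real one from the binary:

```
[0x00454450]> axt @ 0x454144           # list code locations referencing this address
[0x00454450]> s <address reported by axt>
[0x00454450]> V                        # inspect the surrounding disassembly
```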
Disassembly at the interesting location.
Understanding the challenge
The function fcn.00427a30 is called twice: once in the last row and once a few rows earlier. In both cases, immediately before these function calls, the address of a string is written to eax. radare2 was able to identify the last string ("Wrong password"), while we already know the address for the first call: it is the address we found for the "Good job" string. So, the interesting point now is to understand when which part is called. Right before the first mov+call sequence, there is a conditional jump jne which would skip over the "Good job" string. You can see this from the arrows radare2 draws for us, or by looking at the jump address. Scrolling just a bit further up, we see a series of calls to the same function (at 0x4042b4), and the xref analysis from the beginning (aav) also dereferenced some characters for us. Do we already have our password?
Finding xrefs is a very helpful trick in understanding this crackme!
This post shall give you an overview of methods to assess the cybersecurity risks of your products or services. We will go through the development of risk analysis and try to understand why every part is there and what you need it for. Before we start, please be aware: this will not give you a one-size-fits-all approach, nor will it cover the complete research in this field. Rather, the goal is to make you understand what it takes to assess your IT security risks, and to prepare you to conduct basic risk assessments in the fastest and easiest way possible. So let's jump right in.
In the following sections, I introduce and give examples of different approaches for getting an overview of possible IT security threats to your systems.
Directly assessing risks
Throughout this post we will use and come back to one very specific use case. This will serve us as an example to better understand the single steps.
A developer uses his own laptop running the Ubuntu operating system to develop a web application for a company. For this, he uses Visual Studio Code as a development environment and a MariaDB (SQL) database as part of his backend. The web application itself uses HTML, CSS, and JS on the frontend and PHP on the backend. For testing and the later deployment he uses an Apache web server. Finally, the developer already knows about IT security and employs only secured connections using TLS 1.2 and server-side certificates.
Web development use case
For a direct risk assessment, we could now try and start to come up with possible problems. Just by looking at the example description several problems come to mind:
A public exposure of the database may allow unauthorized data access.
A misconfigured web server may leak (confidential) data.
If the web application gets input from an end user to build up database queries, there is a risk of SQL injections.
etc.
After establishing that list, we can now rate the associated risk with each threat, e.g. on a scale from low to high.
This approach has an obvious problem: you can only come up with an extensive list of possible risks if you bring a lot of experience. The risk analyst must already have a feeling for what might go wrong given this setup. This method, hence, can only be useful as a first guess in a very brief discussion of the overall system. For other settings, a more methodical approach is needed.
Asset-based assessment
As a first step, many risk assessment methods establish a list of assets worth protecting. In our example from above, such a list could be:
the developed code of the web application (may be intellectual property)
the private keys of the web server (during development and production use)
data stored in the database
data stored on the web server (machine)
STRIDE
Given this list, we could now think about possible threats to these assets. Probably the most common tool to do so is STRIDE, developed by Microsoft. STRIDE stands for
Spoofing
Tampering
Repudiation
Information Disclosure
Denial of Service
Elevation of Privilege
With this information, we can start with our first basic method. Let's put the assets in a formatted table.
| ID | Name | Description |
|----|------|-------------|
| A1 | Code | Developed code of the web application |
| A2 | DKey | Private key of the web server (development) |
| A3 | PKey | Private key of the web server (production) |
| A4 | DB   | Data stored in the database |
| A5 | WS   | Data stored on the web server (machine) |

Assets in our example.
Threats
For each of these assets, we can now evaluate whether the STRIDE threats apply. Let’s do this.
| Asset | Threat | Description | Realistic? |
|-------|--------|-------------|------------|
| A1 | S | Spoofing of code | How would you spoof this? — not really… |
| A1 | T | Tampering with code | Could be possible with a manipulated compiler, operating system, or development IDE. |
| A1 | R | Repudiation of the code | As the code in our example is not signed or anything comparable by the developer — yes. |
| A1 | I | Information disclosure of the code | A manipulated IDE might upload (parts of) the code to untrusted repositories. |
| A1 | D | Denial of service of the code | This would assume there is no backup of the development code. With versioning systems being used most of the time, this is rather unlikely. |
| A1 | E | Elevation of privilege of the code | Well, that does not make sense. |
| A2 | S | Spoofing of the DKey | You cannot spoof the private key. It is unique, and a different key would not match the server certificate. |
| A2 | T | Tampering with DKey | A manipulated private key would not work. |
| A2 | R | Repudiation of DKey | Makes no sense. |
| A2 | I | Information disclosure of DKey | Yes, then all development tests could potentially run against a wrong test server. |
| A2 | D | Denial of service of DKey | This might only delay testing of the web application. Not dramatic. |
| A2 | E | Elevation of privilege of DKey | Makes no sense. |

Threats to the system.
With this list of threats, we can once again rate the risk associated with each threat qualitatively on a scale. As in the previous example, we still need expert knowledge to do this.
Cumbersome threat lists
Well, ok. I cheated a bit: after only two assets I got exhausted writing up the threats. Reading through them, you may have noticed that a lot of them do not really make sense. Some of the threats, like elevation of privilege on a private key, simply do not apply. So, if we want to get through this faster, we need better filtering of where to apply which of these threats. I should note that a lot of people also use violated security goals instead of STRIDE. Then, we would have
violation of confidentiality (information disclosure)
violation of integrity (tampering)
and violation of availability (denial of service)
Meaningful threats
While the list of possible threats is thereby cut in half, for larger systems this still does not solve the problem of a long list to go through. To filter this list even more, let's think about when the different proposed threats are actually relevant. Spoofing happens during data transmission: either the sender or the receiver could be spoofed. Tampering applies to data in transit, to data stored at locations, and to functions or processes. Here, a function or process is the logical unit which uses and possibly transforms data at some location. Repudiation can happen if data was changed or transmitted; in these two cases, information leakage may also occur. Denial of service applies to data transmission and to functions or processes. Finally, elevation of privilege does not apply to the system under evaluation but rather to the attacker: it describes an entity carrying out an activity it was not authorized to perform. This requires either tampering or spoofing, or an authorized user-action combination which becomes unauthorized under certain unmodeled constraints.
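These applicability rules can be written down as a small lookup table. The sketch below is my reading of the paragraph above, using three element categories (data flows, data stores, and functions/processes); elevation of privilege is left out, since it describes the attacker rather than a single system element:

```python
# Which STRIDE letters meaningfully apply to which kind of model element,
# following the reasoning above. "E" (elevation of privilege) is omitted:
# it targets the attacker's authorization, not a single system element.
STRIDE_APPLICABILITY = {
    "data_flow": {"S", "T", "R", "I", "D"},  # transmission: spoofed endpoints, tampering, ...
    "data_store": {"T", "R", "I"},           # data at rest: tampering, repudiation, leakage
    "process": {"T", "D"},                   # functions/processes: tampering, denial of service
}

def applicable_threats(element_type):
    """Return the STRIDE letters worth considering for a model element."""
    return STRIDE_APPLICABILITY.get(element_type, set())
```

With such a filter, each data store yields three candidate threats instead of six, and meaningless combinations never enter the list in the first place.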
Goal-based assessments
Before we continue with the next method, I want to mention another approach to identify threats to your system which is quite similar to the previous asset-based assessment. Attack trees are a long-established method to analyze the steps an attacker must perform to reach a goal. In our example, we can define different goals of an attacker, e.g., implementing a backdoor in the web application. So let's think about how an attacker could achieve this.
Part of an attack tree.
Ok… I admit, once again I cheated: I do not provide the full attack tree but only an excerpt, so that you get the idea. Except for the topmost branches below the AND node, all edges indicate alternatives. To implement the backdoor, the attacker needs to get access to the code and have some backdoor code available. All other possible steps have several alternatives to reach these two sub-goals. Compared to the asset-based method, this approach yields a much more fine-grained analysis of the system and potentially already shows you the easiest way for an attacker to break your system. However, it answers a slightly different question: an attack tree explains how an attacker might reach a goal, but not what the goal might be. For identifying possible goals, an asset-based assessment is far more suitable. Attack trees are then a perfect companion for your asset-based assessment if you want to better understand how your assets might be attacked, which controls you could introduce, and how hard an attack actually is.
Model-based assessments
So, in the previous sections we saw that, for a better mapping of threats, we can rely on basic assumptions about the STRIDE threats and refine our system model into data exchanges, stored data, senders, receivers, and functions and processes.
With these refinements, our model now looks a bit more detailed.
| ID | Entity | Description |
|----|--------|-------------|
| E1 | VS Code | Development Environment |
| E2 | Webserver | Apache |
| E3 | DB | MariaDB |
| E4 | User | End-user of web application |
| E5 | Developer | Developer of the web application |

Entities involved in the system.
| ID | Data | Description | Stored at |
|----|------|-------------|-----------|
| D1 | DKey | Private key of the web server (development) | E1 |
| D2 | PKey | Private key of the web server (production) | E2 |
| D3 | DB | Data stored in the database | E3 |
| D4 | WS | Data stored on the webserver | E2 |
| D5 | Code | Developed code of the web application | E1, E2 |
| D6 | Website | Website evolving from code and DB content | |
| D7 | Input | User input to website | E2 |

Data stored at different locations in the system.
| ID | Data Flow | Sender | Receiver | Data |
|----|-----------|--------|----------|------|
| F1 | Developer → VS Code: Code | E5 | E1 | D5 |
| F2 | VS Code → Webserver: Code | E1 | E2 | D5 |
| F3 | Webserver → User: Website | E2 | E4 | D6 |
| F4 | User → Webserver: Input | E4 | E2 | D7 |
| F5 | Webserver → DB: Input | E2 | E3 | D7 |

Data flows between entities of the system.
| ID | Description | Location | Input | Result |
|----|-------------|----------|-------|--------|
| G1 | Private key generation (development) | E1 | | D1 |
| G2 | Private key generation (production) | E2 | | D2 |
| G3 | Website generation | E2 | D3, D5 | D6 |
| G4 | User input ingestion | E3 | D7 | D3 |
| G5 | Development | E5 | | D5 |

Functions and processes generating or manipulating data in the system.
Advantages
While writing up these tables does not take any effort away compared to the asset-based method, it will help us quite a bit when rating the risks. First of all, we get a far more detailed picture of the system under evaluation and effectively enable others to follow our reasoning about whether and where one of the STRIDE threats applies. Second, we eliminate many of the previously meaningless threats. And most importantly, our threats automatically become more granular, which makes them easier to assess. If you think about it: it does make a difference — concerning a threat's likelihood — whether you try to manipulate (tampering) the developed web application code (D5) while it is at rest in VS Code (E1) or while it is transferred to the web server (F2).
Disadvantages
On the other hand, this model-based assessment requires some effort in advance to construct the needed system model. Still, from my experience, I strongly suggest you go this route. Often it is this very step which already shows developers and stakeholders their weaknesses. Especially in big companies, it is more often system complexity that introduces IT security threats rather than missing knowledge.
Until now, we covered a lot of state-of-the-art methods. In the next part of this new series we will touch upon more technical details of the approaches and develop a template for assessments.
I realized that Netflix is not working, as they rigorously ban the use of VPNs and proxies. Hence, it does work when using my home server as gateway/router, but not when I additionally enable the VPN.
First, I tried to enable or disable the VPN whenever I see the corresponding DNS queries for either Amazon or Netflix services. However, it turned out that my TV communicates with both services regardless of which one I am currently using. Therefore, it is not possible to infer the desired state.
To ease the trouble at least a bit, I wrote a small webserver which allows me to make the switch with a click on a bookmark.
Flask Server
The following code snippet shows a default setup for a Flask server in Python. It listens on the defined IP address and port.
#!/usr/bin/env python3
from flask import Flask
import os

app = Flask(__name__)

if __name__ == '__main__':
    app.run(host="192.168.42.19", port=5000)
Handling the VPN
To handle the VPN switching, I just add or remove the ip rule that forwards the traffic either via my wireguard table or via the default one. A very hacky check routine identifies the current routing state.
def enableVPNRoute():
    print("enable VPN")
    # Forward traffic arriving on the LAN interface via the wireguard table.
    os.system("ip rule add iif enp3s0 lookup 51820")
    # Flush cached routes so the new rule takes effect immediately.
    os.system("ip route flush cache")

def disableVPNRoute():
    print("disable VPN")
    os.system("ip rule del iif enp3s0 lookup 51820")

def isVPNOn():
    # Very hacky check: the iif rule only exists while the VPN route is active.
    ir = os.popen("ip rule").read()
    return "iif" in ir
Establishing the Routes
Finally, Flask provides the app.route decorators, which I use to enable the VPN when the webserver is called at http://192.168.42.19:5000/amazon and to disable it at http://192.168.42.19:5000/netflix.
@app.route('/amazon')
def amazon():
    if not isVPNOn():
        enableVPNRoute()
    return "Turned VPN on. Enjoy Amazon :-)"

@app.route('/netflix')
def netflix():
    if isVPNOn():
        disableVPNRoute()
    return "Turned VPN off. Enjoy Netflix :-)"
As a scientist, my usual workday can be divided into three, sometimes four, essential steps:
Reading a lot of information
Looking out of the window ⇒ that is, thinking
Writing down new results
And finally sometimes, implementing proof-of-concepts
As is evident from this list, writing, either reports or code, makes up at least one third and up to half of my work. To improve the efficiency of this task, I started thinking about our standard keyboard layout. QWERTY (or QWERTZ for the Germans) was invented to prevent neighbouring letter arms in typewriters from getting tangled up with each other. You may notice that optimizing for this goal does not necessarily result in an optimal setup for typing. But stop, let us think about that: we do not actually use typewriters anymore, and our main goal for optimization has thus changed. Hence, there have been several attempts at enhancing typing efficiency in different languages. At the beginning of 2017, I started to use the German layout variant neo. It is similar, but not identical, to the more general Dvorak.
The idea of these layouts is to reorganize the letters such that the most common and frequently used keys lie closer to each other and reside on the home row. The home row is the one your fingers rest on anyway when using proper ten-finger typing. By adapting to such a different keyboard layout, you thus gain the advantage of having to move your fingers much less. Some people who converted to these layouts claim it helps them reduce strain on hands and fingers. Others suggest that, by having their fingers move less, they are able to type faster.
1st layer of neo layout
Looking at the first layer of keys in the neo layout in the image above, you see all the vowels residing on the left side of the home row. In German, two vowels only rarely follow each other; hence, it makes sense to put them close together. After practicing this layout for one and a half years now, let me put down the pros and cons.
Pro:
- less movement in your hands
- starting fresh may eliminate bad habits
- maybe increased typing speed
- shoulder surfing for your passwords is way harder

Con:
- you need to maintain some proficiency in QWERTY/QWERTZ
- high effort until you reach the same typing level again
- you probably need to configure the machine you are working on first
For me, the benefits definitely outweigh the drawbacks. I typically work only on my own machines, where I have full control over the keyboard layout used. While it took me only two to three months to learn the layout, I needed an additional year to reach my previous typing speed. However, when you start out fresh, you get the chance to learn typewriting from scratch, allowing you to eliminate all your bad habits on the fly. This was a big deal for me, as the new layout forces you to type blindly from the start.
After 1.5 years, I still enjoy the calm when typing prose, as it now feels much more like flowing out of my hands. For me, this really is a boost in typing comfort, if there is such a thing.
Layer 4
The real benefit of neo, for me, comes with the fourth layer. These are the symbols you type in combination with the right ALT key or the key next to the left SHIFT key. With this layer, you get access to the arrow keys, page up/down, and the home and end keys. Especially when working on mobile computers, which often lack these keys, this makes editing longer texts much more efficient. For experimental scientists, the virtual numpad on this layer is also great. However, I only use it if I really have a lot of numbers to enter; for the occasional entry of a few numbers, I still prefer the normal number row.
Layer 3
On the third layer, you find most of the symbols you need for programming, as well as special characters like @ and different dashes. While it might seem cumbersome to reach things like / or { only by pressing two keys at once, it is fine for me, as the needed finger movement is small enough. In addition, it is great to have access to the hyphen -, the en-dash –, and the em-dash —. If you do not know the difference, I encourage you to look it up: good texts use the right symbols.
After moving to a different country and with some days of vacation left, one of my concerns was to get my whole home IT setup working again. For me, this involves setting up my server, building the network infrastructure and wireless access points, and ensuring all my everyday services are up and running. One thing I came across when doing this is the hassle of geoblocked streaming services which I still want to use. While my Netflix subscription still works out of the box on my TV, the Amazon Prime Video service does not work anymore. Hence, in this article I will show you how to circumvent Amazon's geoblocking with only open-source tools.
I can still sign in to my Amazon account and see all the movies and shows. However, nothing is available to be watched in my country, i.e., Denmark. After a while I realized I was not looking at my German Amazon interface but the UK one. No surprise: I can log in to the UK Prime Video service, but as my Prime subscription runs on the German branch, I do not have access to the UK movies and shows.
What does the TV do?
So, I had a look at the network traffic of my TV. I noticed that the communication with Amazon always started with a call to atv-ext-eu.amazon.com. First, I thought this endpoint geolocates my IP address and then determines to which branch I shall be redirected. In that case, I could intercept the response and alter it to point to the German branch; my TV would then once again be talking to the German Prime Video. However, there was no such redirect response, and apparently the forwarding happens on the server side.
What then?
Normally, people use proxies or VPN services to circumvent these Amazon geoblocking problems. Unfortunately, my TV comes with neither a proxy nor a VPN configuration option. One solution, thus, would be to configure my router to route all traffic via a German VPN. But then all traffic would always go through this bottleneck, even for services which do not need it, and besides, my router does not allow for such a configuration anyway. Hence, I built my very own approach to solve this problem.
The setup
I have a small server in my network for different services anyway. The idea is simple: intercept all traffic from my TV and route it through a VPN endpoint in Germany.
Using an ArchLinux server as a Router
The first step is to configure the IP address of my home network server as the default gateway of my TV. Then, we need to make sure that the server actually forwards IP packets destined for other machines.
sysctl -w net.ipv4.ip_forward=1
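This setting does not survive a reboot; to make it persistent, it can go into a sysctl drop-in file (the file name below is just a convention):

```
# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1
```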
VPN connection
For the VPN connection, I ordered a small virtual private server in Frankfurt, Germany, with unlimited traffic and decent bandwidth. This server and the server in my home network are configured to establish a wireguard connection. While the main setup is standard, I made some modifications. The general setup for a VPN server is well explained in the ArchLinux wiki: https://wiki.archlinux.org/index.php/WireGuard
In contrast to the default wireguard VPN setups, I added PostUp and PostDown directives to ensure the VPN server actually performs network address translation. Obviously, you need to have iptables up and running for this. A second important point is that we need to include in AllowedIPs all IP addresses that may send traffic through this VPN. As I have different subnets for the VPN itself and my actual home network, these are the wireguard address of my home server (10.42.0.2) and the local IP address of my TV (192.168.42.180).
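For illustration, a minimal sketch of the corresponding wg0.conf on the VPS side; the keys are placeholders, and eth0, the listen port, and the 10.42.0.0/24 subnet are assumptions based on the addresses mentioned above:

```ini
[Interface]
Address = 10.42.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
# Masquerade forwarded traffic out of the VPS's public interface (assumed eth0)
PostUp   = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = <home-server-public-key>
# The home server's wireguard address plus the TV's LAN address,
# so forwarded TV traffic is accepted and routed back correctly.
AllowedIPs = 10.42.0.2/32, 192.168.42.180/32
```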
Afterwards, we can just start the network interface with wg-quick up wg0.
As I wrote in the beginning, I use the server in my home network for several other services, which are partly also publicly available. I do not want to route these services through the VPN but through my original internet connection, as I have much more bandwidth there.
To separate its rules from your normal routing, wg-quick sets up a second routing table. On my system, this table got named 51820. WireGuard then creates a single rule which sends all traffic not marked with 51820 through this second routing table. At the same time, WireGuard marks all traffic going out to the remote endpoint with exactly that mark, so this traffic is routed using your default table. To prevent routing all other traffic through the VPN, we exchange this mark filter for a better suited one. After starting the WireGuard interface with wg-quick up wg0, I remove the general routing rule which forces all traffic through the VPN. Then, we replace it with a rule which only uses the second routing table for packets arriving on the plain network interface, i.e., enp3s0 on my system. This is done using the iif (incoming interface) rule.
ip rule del not fwmark 51820 lookup 51820
ip rule add iif enp3s0 lookup 51820
Therefore, all traffic originating at the machine itself is routed using the default table, while only packets which arrive on enp3s0 and are to be forwarded use the second table.
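Instead of typing these ip rule commands manually after every restart, they can also be hooked into the client-side wg-quick config — a sketch, assuming the interface file is /etc/wireguard/wg0.conf and keeping the interface name from my setup:

```ini
[Interface]
# ... Address, PrivateKey, etc. as before ...
# Swap the default mark rule for the incoming-interface rule
PostUp = ip rule del not fwmark 51820 lookup 51820; ip rule add iif enp3s0 lookup 51820
# Clean up our custom rule before the interface goes down
PreDown = ip rule del iif enp3s0 lookup 51820
```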
Finally, every device in my home network can now choose between two different gateways: the Danish one at .1 and the German one via the WireGuard VPN at .19. All that is needed is changing the gateway on the corresponding device and appending its IP address to the AllowedIPs in the WireGuard configuration on the VPN server.
Only Drawback
For now, everything is working fine: Prime Video and YouTube play well over this setup. But, for some unknown reason, I cannot connect to Netflix. Debugging showed that the app tries to reach three different servers, while only two of these connections succeed. I still need to figure out what is wrong here…
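For this kind of debugging, a packet capture on the forwarding home server is helpful. A sketch using tcpdump, with the interface and TV address from my setup — watching connection attempts (SYN) and aborts (RST) shows which servers never answer:

```shell
tcpdump -ni enp3s0 'host 192.168.42.180 and tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'
```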
Choosing a suitable editor can be a hard problem. At least it is for me. For several years now, I have been constantly switching back and forth between vim, Atom.io, and Visual Studio Code. Lately, I really enjoyed using Atom due to its nice looking interface. However, on my system with my set of plugins, VS Code was way faster. Realizing this, I remembered there is another editor many people prefer which I had never taken a closer look at. So, in this post I will dive into how I set up emacs to suit my academic writing needs and my development needs.
For now, let us start with the set of plugins I use, why I use them, and how I configured them. I am pretty sure there are better options for some tasks. If you know one, give me a hint in the comments section so that I can have a look at it.
This list only contains the absolute highlights of my config. There are some more plugins I use which you will notice when we have a closer look at the config itself. In the following, however, I just want to note some of the commands I frequently use inside these plugins for my day-to-day tasks. Each plugin has detailed descriptions on how to use it, so check the links provided.
Magit
Important commands I use quite often:
commit: c c
push: P p
pull: F u
status/refresh: g
diff: d d
quit: q
Projectile
Important commands I use quite often:
list projects: p
grep in all files of project: p s g
Configuration
Now, the most interesting part is probably the setup in emacs itself. Let's go through my .emacs file together and see what's there and why. I am afraid I do not comment on every line, but I will try my best to give an impression of what is done and why.
The block above ensures we can load plugins (l. 1, 3-5) and stores the history of commands across sessions (l. 2). Finally, we load our favorite color/syntax theme atom-one-light.
Next, we set up the window to start cleanly (l. 2-5) and initialize the package management (l. 1, 6-12).
(use-package evil
  :ensure t
  :init
  (setq evil-want-keybinding nil)
  :config
  (evil-mode t)
  (modify-syntax-entry ?_ "w")
  (use-package evil-commentary
    :ensure t
    :config
    (evil-commentary-mode))
  (use-package evil-leader
    :ensure t
    :config
    (global-evil-leader-mode)
    (evil-leader/set-leader ",")
    (evil-leader/set-key
      "m" 'helm-M-x))
  (use-package evil-collection
    :ensure t
    :config
    (evil-collection-init 'outline)))
The above code initializes the evil plugin as well as some additions to it. Note the special directives of the use-package plugin. The :ensure t clause installs the corresponding package if it is not present. This enables reuse of the .emacs config on a different machine as long as use-package is installed there. Commands in :init are executed before the package is actually loaded, commands in :config right after loading. In the case of evil, I define the word text object to include underscores like it does in vim. Additionally, I define , to be my leader key and map the helm plugin to ,m.
(use-package magit
  :ensure t
  :config
  (evil-leader/set-key
    "g" 'magit))
We use darkroom and the markdown mode. The :mode directive binds markdown-mode to the provided file extensions. Additionally, we modify the default compile command to our needs, i.e., pandoc --natbib (see also Academic Writing using Pandoc). As I like graphing with graphviz, we install a graphviz plugin. Sublimity then provides us with smoother scrolling. However, this is not working very well for me, and I am still looking for better options.
(use-package powerline :ensure t :config (powerline-center-evil-theme))
We use the powerline module with the evil theme to not get lost in emacs’ mode universe.
Developing
We use projectile for project management and map it to ,p. Additionally, we activate helm and remap the default M-x command to helm-M-x. This is done to ensure we always run helm with its awesome fuzzy search. Finally, we activate helm's projectile plugin.
For Python development, we install flycheck, a syntax checker with Python support, and enable it globally. As primary Python development mode, I prefer elpy, where we need to activate the corresponding flycheck mode and remap one of my favorite commands, elpy-goto-definition, to M-g.
As I am also doing web development for some projects, I included configuration for web-mode as well as php-mode. The awesome emmet plugin is also great. However, I do not have much experience with any of them, so don't blame me for errors in that part of the config.
Finally, we set up neotree to use all-the-icons, map it to ,t, and bind it to the projectile project management. This adds custom behavior to always open the neotree view when we switch to a different project. I think that is quite handy to get a quick overview of where you just moved to.
If you are a developer like me, you probably prefer working on a Linux based system. However, business sometimes requires you to interact with the well-known and omnipresent Microsoft Office solutions. While one solution to this problem is running the tools with wine, I recently started to power up my Windows VM in VirtualBox instead. Some things, like SharePoint integration, just require this setup. However, I often still create my initial drafts in pandoc and only later export them to the MS Office document formats. To ease my workflow between MS Office and the other software I normally use outside this VM, i.e. on my Linux host, I created some small desktop files allowing a tight integration of the virtualized MS Office products into your Linux desktop. In this post I show you how to open your office documents inside a VM directly from your Linux file manager.
Word documents
A file named ~/.local/share/applications/Word\ Win7-VM.desktop containing the following allows you to choose Word Win7-VM as your default application for the specified document.
[Desktop Entry]
Name=Word Win7-VM
Exec=vboxmanage guestcontrol Win7 --password "YourWindowsPassword" run "C:\Program Files (x86)\Microsoft Office\Office16\WINWORD.EXE" X:%f
Type=Application
StartupNotify=true
Comment=Create beautiful documents, easily work with others, and enjoy the read.
You need to set up your VM such that the network device X: in Windows points to your Linux file system root. In VirtualBox you can easily achieve this with a shared folder. The read-only flag might be set to avoid unintended changes to your root.
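With the VBoxManage CLI, such a shared folder can be created like this — a sketch; the share name root is my choice, and the VM name Win7 matches the desktop file above:

```shell
# Share the host's file system root with the VM, read-only
VBoxManage sharedfolder add Win7 --name root --hostpath / --readonly --automount
```

Inside Windows, the automounted share then still needs to be mapped to the drive letter X:.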
After that, you can set Word Win7-VM as the default application for Word documents in Linux.
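From the command line, one way to register this default is with xdg-mime — the MIME type shown is the standard one for .docx files:

```shell
xdg-mime default 'Word Win7-VM.desktop' application/vnd.openxmlformats-officedocument.wordprocessingml.document
```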
Markdown became one of the most powerful tools in my daily research business. Nowadays, everyone faces a lot of writing work. LaTeX, which I learned during my studies at university, is great for creating beautiful and reproducible documents without all the Microsoft Word hassles, but it has some major drawbacks. Some people claim it is way too hard to learn and not as approachable as WYSIWYG (what you see is what you get) editors; others – like me – are just annoyed by constantly having to type one of \ { }. If you are more interested in LaTeX, I suggest reaching out to your favorite search engine, which will have plenty of resources for reference. For me, markdown appeared as a perfect trade-off between these two worlds. Hence, I show you how you can also benefit from using pandoc for academic writing tasks.
Primer on Markdown
When talking (or writing) about markdown, I need to clarify that I will discuss the pandoc markdown dialect here. To my knowledge it is the most powerful dialect and conversion tool. Just have a look at the awesome conversion possibilities indicated on their homepage.
pandoc conversion possibilities
If you have not worked with markdown yet, let me briefly introduce some of the basics. This will not be a complete tutorial on pandoc, and there is way more functionality than I could describe in just one article. If you are interested, have a look at the manual or leave me a comment on what you need to know or need help with.
Basic Text / Paragraphs
Text is just plain text. There is nothing special about it. Just write it. That is the actual beauty of markdown: it allows you to focus on the most essential part of your writing. If you happen to need some formatting like italics, bold print, or strikethrough text, you have simple commands at hand.
*italic* renders to italic, **bold** to bold, and ~~strike~~ to strikethrough
Headings
A heading in markdown is indicated by a hash sign # followed by the name of the heading. Deeper heading levels just use a correspondingly larger number of hash signs. Normally, headings will be numbered. Unnumbered headings can be created by appending {-} to the end of the heading – this is actually short for {.unnumbered}. Additionally, if you are going to output to latex or pdf files, starting with heading level 4 you have access to paragraphs.
# Headline 1 {.unnumbered}
# Also unnumbered headline {-}
## Headline Level 2
#### Paragraph in Latex
These are generally unnumbered
Tables in pandoc markdown can easily be written by separating the columns using pipes |. The first line will automatically be converted to the table heading. You indicate the formatting of the table – that is, whether the text in a column is justified left, right, or centered – in the second line. :--- thereby indicates left justified, ---: correspondingly right justified, and :---: centered. Be aware that the relative number of dashes indicates how wide the column shall be and that you always need at least three dashes! Thus, :---|------: means we have two columns. The first is left justified, while the second is right justified. Additionally, the second column should take up twice as much width as the first. A caption can be added by a newline beneath the table beginning with Table: caption goes here.
In my experience, these formatting instructions are really easy to use and very helpful when creating simple documents. For scientific work, however, I mostly rely on the inline latex feature which pandoc provides. This allows you to write arbitrary latex code right inside the markdown file at any place. During the document conversion, pandoc will just skip this part and copy it as-is to the final latex document. Just keep in mind that you will lose these parts if you do not export to latex in the end.
# Pandoc style tables
head col 1 | head col 2
:---:|:---
centered col | left aligned col
Table: Caption for the table (and yes, it gets converted to real captions ;-))
# Latex style tables
\begin{table}
\begin{tabular}...
\end{tabular}
\end{table}
This also works.
Mathematics
If you know LaTeX, you probably enjoy typesetting equations with it. It is way easier than in WYSIWYG editors. Pandoc markdown allows the same notation as LaTeX, so you can just write your equations as you are used to.
$$a = \sum_{\forall a_i \in A} a_i^2$$
One of the reasons why I love pandoc markdown so much is that the above code, which renders a beautiful latex formula, can be converted to a valid MS Word equation. It will not convert the formula to an image and include that, but instead creates the according equation object. So far, this feature has saved me several hours, and sometimes I start a document just for one equation, convert it, and copy it over to some document I am working on. Just amazing – thanks to the developers!
Bibliography
For academic writing, you definitely need to know how to reference previous work using pandoc. In markdown this is as easy as it gets. If you are used to the latex style of using \citet{} or \textcite and \citep or \cite, you will enjoy how easy citing can be. First of all, each markdown document can have a preamble, like a latex document, providing some metadata. This preamble starts with three dashes and ends the same way. A bibliography file can be provided as shown below.
---
bibliography: bibfile.bib
---
# Heading 1
Actual citing is now as easy as referencing the corresponding bibtex key. What I really like is that this concept even works if you finally output to MS Word documents. It will give you correct citations from your bibtex bibliography.
As is shown by @Nohl2014 (for in text citing \citet)
The BadUSB publication [@Nohl2014] (for in parentheses citing \citep)
Multiple authors can also be cited [@Nohl2014; @Langner2013].
Sharing your final work
One of the benefits of writing in markdown is the possibility to export your final document to every format you may require. For example, you can generate a pdf file directly with -o document.pdf, or export to latex source with -t latex and do the final document creation on your own. The full command for exporting even to a Word document would then look like below.
pandoc document.md -t docx -o document.docx
For academic writing, however, I prefer to export the pandoc document to a latex document which I then include in the required latex template. Thus, I export to an intermediate file paper.tex and then use \include{paper.tex} inside the main document, e.g., sample.tex. Additionally, experience has shown that pandoc sometimes outputs commands my latex templates do not recognize. These are usually concerned with tables. I, therefore, replace these commands with my table styles. Placing the full pipeline in a file called Makefile in the same folder then allows for using make on the command line for producing the final output.
pandoc paper.md -t latex -o paper.tex --bibliography bibfile.bib --natbib --top-level-division=section --toc
# Convert longtable to supertabular
# Requires \usepackage{supertabular}
sed -i -e 's/longtable/supertabular/g' paper.tex
# Remove weird endhead and endfirsthead
sed -i -e 's/\\endhead//g' paper.tex
sed -i -e 's/\\endfirsthead//g' paper.tex
# Compile tex document.
# sample.tex contains the preamble, style and command definitions and has a \include{paper.tex} for the actual content
xelatex sample.tex
bibtex sample
# Recompile for bib and toc updates
xelatex sample.tex
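Wrapped into make syntax, the same pipeline might look like this — a sketch; the file names match the commands above, and the dependency lines are my own structuring:

```make
all: sample.pdf

paper.tex: paper.md bibfile.bib
	pandoc paper.md -t latex -o paper.tex --bibliography bibfile.bib --natbib --top-level-division=section --toc
	# Convert longtable to supertabular, drop unsupported commands
	sed -i -e 's/longtable/supertabular/g' -e 's/\\endhead//g' -e 's/\\endfirsthead//g' paper.tex

sample.pdf: sample.tex paper.tex
	xelatex sample.tex
	bibtex sample
	# Recompile for bib and toc updates
	xelatex sample.tex
```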
Producing Slideshows
As shown in the previous section, you can produce beautiful documents by just writing markdown. But the magic of pandoc does not stop at producing documents: you can even create awesome slideshows with it. Just follow the same principles as before and make sure you have headings from all levels 1 through 3. Each level 3 heading designates a new slide. By using the -t beamer option, you can then render awesome pdf slideshows from latex code.
pandoc -t beamer --listings tool.md > input.tex
# Compile presentation.tex which holds preamble, style and command definitions and has a \include{input.tex} for the actual content
pdflatex presentation.tex
Finally, also make sure you use a decent editor for this workflow. If you need inspiration on how to use emacs for pandoc/markdown editing, have a look at my emacs setup.