# Category: General

A Virtual Private Network (VPN) shields your device from other devices on the same local network.
It does so by encrypting all traffic that leaves your machine; only a chosen, trusted third-party machine can decrypt the traffic again.
That machine is usually hosted either by a service provider or, e.g., by your company's IT department.
In this post, I share some thoughts on when to use a virtual private network and when you are probably better off without one.

## At work

The IT department of your employer might ask you to always use their own VPN solution when connecting with your work devices. They do so to ensure that devices in your home network cannot affect the company, e.g. when you are working from home or traveling.

## At home

You might wonder whether encrypted network traffic is something you also need in your private life.

If you are at home, I would advise against using a VPN. It will likely only bring you disadvantages: a slower Internet connection, lower bandwidth, or even broken video streaming. On top of that, there is not much benefit, assuming you actually can and do trust your own local network at home. If that is not the case, for example because your house is full of gadgets from shady marketplaces, then a VPN is indeed a good idea. However, you should not use such gadgets in the first place.

The situation is different when you are on the go. In public places like train stations, airports, or your favorite coffee shop, you connect to a local network over which you have little control. Be aware that almost anyone can join these networks, provided they are physically present, and might then tamper with your network traffic: read it, modify it, or reroute it. In these cases, a VPN can save you a lot of worrying, as it leaves locally connected attackers with nothing but encrypted bytes on the wire (or over the Wi-Fi), which are useless to them.

The recommendations differ based on who places what kind of trust in your local network. With the company VPN, it is your employer who does not trust your home network. On the go, it is you who should not trust the random networks you connect to. At your own home, however, you are most likely better off without a VPN.

## Other reasons

Finally, there are further reasons that might make you get a VPN. For example, it can circumvent geoblocking on video streaming sites when you are abroad. Maybe you have encountered this already: you are on vacation somewhere in the world and want to enjoy the next episode of your favorite show, but your streaming service tells you it is only available in your home country. Here, a VPN can virtually transport you back home, or wherever else in the world you want to be, letting you enjoy that next episode.

One of the first steps attackers take when probing your environment is often referred to as "enumeration". Since I am running multiple web services on my own servers, I am of course curious what all this open-source software actually exposes. So let's start with enumeration of your web servers.
Naturally, I could deep-dive into the source code and evaluate its functionality. Instead, this post shows you how to find information about your web servers through enumeration, without looking at the code.

## Port Scanning

Let's start with a remote test. Currently I am at work and can only scan my home network from the outside. Let's do this.

Looking at my home network from the outside, we see that several ports are open. Let's redo this detection and see whether nmap can gather more information on the services it listed. The -A flag tells nmap to try to detect the operating system and the software versions in use, to scan for (nmap-)known vulnerabilities, and to perform a traceroute to estimate the server's location. This time I also added the -T4 flag, which basically tells nmap to use a faster timing template. This speeds up the analysis but may be less successful in detecting everything.
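For readers following along, the scan described here boils down to a single nmap invocation; the hostname is a placeholder for your own target:

```shell
# -A: OS detection, version detection, script scanning, and traceroute
# -T4: a faster, more aggressive timing template
nmap -A -T4 yourhostname.or.ip
```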

```
Host is up (0.011s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE    VERSION
23/tcp   open  ssh        OpenSSH 8.3 (protocol 2.0)
53/tcp   open  domain?
443/tcp  open  ssl/http   nginx 1.18.0
| http-robots.txt: 1 disallowed entry
|_/
|_http-title: Jitsi Meet
| ssl-cert: Subject: commonName=meet.yourhostname.or.ip
| Subject Alternative Name: DNS:meet.yourhostname.or.ip
| Not valid before: 2020-08-18T12:17:00
|_Not valid after:  2020-11-16T12:17:00
|_ssl-date: TLS randomness does not represent time
| tls-alpn:
|   h2
|_  http/1.1
| tls-nextprotoneg:
|   h2
|_  http/1.1
2000/tcp open  tcpwrapped
5060/tcp open  tcpwrapped
8008/tcp open  http
| fingerprint-strings:
|   FourOhFourRequest:
|     HTTP/1.1 302 Found
|     Location: https://:8015/nice%20ports%2C/Tri%6Eity.txt%2ebak
|     Connection: close
|     X-Frame-Options: SAMEORIGIN
|     X-XSS-Protection: 1; mode=block
|     X-Content-Type-Options: nosniff
|     Content-Security-Policy: frame-ancestors
|   GenericLines, HTTPOptions, RTSPRequest, SIPOptions:
|     HTTP/1.1 302 Found
|     Location: https://:8015
|     Connection: close
|     X-Frame-Options: SAMEORIGIN
|     X-XSS-Protection: 1; mode=block
|     X-Content-Type-Options: nosniff
|     Content-Security-Policy: frame-ancestors
|   GetRequest:
|     HTTP/1.1 302 Found
|     Location: https://:8015/
|     Connection: close
|     X-Frame-Options: SAMEORIGIN
|     X-XSS-Protection: 1; mode=block
|     X-Content-Type-Options: nosniff
|_    Content-Security-Policy: frame-ancestors
|_http-title: Did not follow redirect to https://yourhostname.or.ip:8015/
|_https-redirect: ERROR: Script execution failed (use -d to debug)
```

This time some things changed. While the simple port scan at the beginning made nmap guess that port 23 hosts a telnet service, it now shows an OpenSSH server, version 8.3, even though you would normally expect it on port 22. Second, port 53 seems to host a DNS service. nmap also detected that port 443 actually hosts a web server running TLS, with Jitsi Meet behind it. For ports 2000 and 5060, we still do not know what they are. Finally, port 8008 has another web server on it, but we have no further information about it.

## Banner Grabbing

In fact, all of this is correct. But how did nmap learn about the changed SSH port? The answer is banner grabbing, the same process that shodan.io uses.
In short, you open a TCP socket to the port and start receiving bytes from it. I have quickly scripted this in Python to demonstrate.

```python
#!/usr/bin/env python3
import socket
import argparse

def main(args):
    # open a plain TCP connection and print whatever the service sends first
    s = socket.socket()
    s.connect((args.host, args.port))
    print(s.recv(1024).decode("utf-8"))
    s.close()

if __name__ == '__main__':
    prs = argparse.ArgumentParser("Banner grabbing in python")
    prs.add_argument("host")
    prs.add_argument("port", type=int)
    args = prs.parse_args()
    main(args)
```

If I run this against the relocated SSH service, we get the same information nmap got.
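To see banner grabbing work end to end without touching anyone's remote host, you can emulate a tiny service locally; the banner string below is invented for this demo:

```python
import socket
import threading

BANNER = b"SSH-2.0-OpenSSH_8.3\r\n"  # made-up banner for the demo

def fake_service(srv):
    # accept one connection and greet it, just like a real SSH daemon would
    conn, _ = srv.accept()
    conn.sendall(BANNER)
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
srv.listen(1)
threading.Thread(target=fake_service, args=(srv,), daemon=True).start()

# the grabbing side: connect and read whatever the service sends first
s = socket.socket()
s.connect(("127.0.0.1", srv.getsockname()[1]))
banner = s.recv(1024).decode("utf-8")
s.close()
print(banner.strip())  # SSH-2.0-OpenSSH_8.3
```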

So the remaining question is what changes if we have local access to a server and want to check for available services. Long story short: not much. I also scanned my machine locally with nmap localhost -A and found some more services which are only published locally or on the internal network. I hope this enables you to get started with nmap and find out what is actually on your network. By the way, you can also scan a whole network using CIDR notation:

```shell
nmap 192.168.178.1/24
```

This post gives you an overview of methods to assess the cybersecurity risks of your products or services. We will go through the development of a risk analysis and try to understand why every part is there and what you need it for.
Before we start, please be aware that this will neither give you a one-size-fits-all approach nor cover the complete research in this field. Rather, the goal is to help you understand what it takes to assess your IT security risks and to prepare you to conduct basic risk assessments in the fastest and easiest way possible. So let's jump right in.

In the following sections, I introduce and give examples for different approaches to getting an overview of possible IT security threats to your systems.

## Directly assessing risks

Throughout this post we will use and come back to one very specific use case. This will serve us as an example to better understand the single steps.

A developer uses his own laptop running the Ubuntu operating system to develop a web application for a company.
He uses Visual Studio Code as his development environment and a MariaDB (SQL) database as part of his backend. The web application itself uses HTML, CSS, and JS on the frontend and PHP on the backend. For testing and the later deployment he uses an Apache web server. Finally, the developer already knows about IT security and employs only secured connections using TLS 1.2 and server-side certificates.

Web development use case

For a direct risk assessment, we could now try and start to come up with possible problems. Just by looking at the example description several problems come to mind:

• Public exposure of the database may allow unauthorized data access.
• A misconfigured web server may leak (confidential) data.
• If the web application gets input from an end user to build up database queries, there is a risk of SQL injections.
• etc.

After establishing that list, we can now rate the associated risk with each threat, e.g. on a scale from low to high.
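As a toy illustration of such a qualitative rating (the scale and the combination rule here are my own, not a standard), one could conservatively combine likelihood and impact by taking the worse of the two:

```python
# qualitative risk rating sketch: risk = worse of likelihood and impact
LEVELS = ["low", "medium", "high"]

def rate(likelihood: str, impact: str) -> str:
    return LEVELS[max(LEVELS.index(likelihood), LEVELS.index(impact))]

print(rate("low", "high"))    # high
print(rate("medium", "low"))  # medium
```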

This approach has a quite obvious problem: you can only come up with an extensive list of possible risks if you bring a lot of experience. The risk analyst must already have a sense of what might go wrong in this setup. This method can therefore only serve as a first guess in a very brief discussion of the overall system. For other settings, a more methodical approach is needed.

## Asset-based assessment

As a first step, many risk assessment methods establish a list of assets worth protecting.
In our example from above, such a list could be:

• the developed code of the web application (may be intellectual property)
• the private keys of the web server (during development and production use)
• data stored in the database
• data stored on the web server (machine)

### STRIDE

Given this list, we can now think about possible threats to these assets. Probably the most common tool for this is STRIDE, developed by Microsoft. STRIDE stands for

• Spoofing
• Tampering
• Repudiation
• Information Disclosure
• Denial of Service
• Elevation of Privilege

With this information, we can start with our first basic method. Let's put the assets into a formatted table.

### Threats

For each of these assets, we can now evaluate whether the STRIDE threats apply. Let’s do this.
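The mechanical part of this step, pairing every asset with every STRIDE threat, is a plain cross product; it can be sketched in a few lines (asset names paraphrased from the list above):

```python
from itertools import product

assets = [
    "web application code",
    "web server private keys",
    "database contents",
    "web server data",
]
stride = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service",
    "Elevation of Privilege",
]

# naive enumeration: every asset paired with every threat class
threats = [f"{t} of {a}" for a, t in product(assets, stride)]
print(len(threats))  # 24 candidate threats for just four assets
```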

With this list of threats, we can now once again rate the risk associated with each threat qualitatively on a scale. As in the previous example, we still need to assess this with our experts' knowledge.

### Cumbersome threat lists

Well, ok. I cheated a bit: after only two assets, I got exhausted writing up the threats. Reading through them, you might have noticed that many of them do not really apply; Elevation of Privilege of a private key, for example, simply makes no sense. So if we want to get through this faster, we need better filtering of where to apply which threat. I should note that many people also use violated security goals instead of STRIDE. Then we would have

• violation of confidentiality (information disclosure)
• violation of integrity (tampering)
• and violation of availability (denial of service)

### Meaningful threats

While this cuts the list of possible threats in half, it still does not solve the problem of a long list to go through for larger systems.
To filter the list even more, let's think about when the different proposed threats are actually relevant.
Spoofing happens during data transmission. Either the sender or the receiver could be spoofed.
Tampering applies to data in transit, data stored at locations, as well as to functions or processes. Here, a function or a process shall be the logical unit which utilizes and possibly transforms data at any location.
Repudiation can happen when data was changed or transmitted.
In both cases, information disclosure may also occur.
Denial of Service applies to data transmission and functions or processes.
Finally, Elevation of Privilege does not apply to the system under evaluation but to the attacker. It describes an entity carrying out an activity it was not authorized for. This requires either tampering with or spoofing the system, or a user-action combination that is authorized in general but becomes unauthorized under certain constraints not captured by the model.

## Goal-based assessments

Before we continue with the next method, I want to mention another approach to identifying threats to your system, quite similar to the previous asset-based assessment.
Attack trees are a long-established method for analyzing the steps an attacker must perform to reach a goal. In our example, we can define different attacker goals, e.g. implanting a backdoor in the web application. So let's think about how he could achieve this.

Ok… I admit, once again I cheated and provide only an excerpt of the attack tree so that you get the idea. Except for the topmost branches from the AND node, all edges indicate alternatives. To implant the backdoor, the attacker needs to get access to the code and have some backdoor code available. All other steps have several alternatives to reach these two sub-goals. Compared to the asset-based assessment method, this approach yields a much more fine-grained analysis of the system and potentially already shows you the easiest way for an attacker to break it.
However, it answers a slightly different question. An attack tree explains how an attacker might reach his goal but not what his goal might be. For the identification of possible goals an asset-based assessment is way more suitable. But attack trees are then a perfect companion for your asset-based assessment if you want to better understand how your assets might be attacked, which controls you may introduce and how hard an attack actually is.
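To make the AND/OR structure concrete, an attack tree can be modeled as nested nodes; the leaf steps below are invented placeholders for the backdoor example:

```python
# each node is ("AND"/"OR", children); leaves are attacker steps
tree = ("AND", [
    ("OR", [  # sub-goal: get access to the code
        "steal developer laptop",
        "compromise the code repository",
    ]),
    ("OR", [  # sub-goal: have backdoor code available
        "write own backdoor",
        "reuse public exploit code",
    ]),
])

def feasible(node, capabilities):
    # a goal is reachable if all AND-children / any OR-child is reachable
    if isinstance(node, str):
        return node in capabilities
    op, children = node
    results = [feasible(c, capabilities) for c in children]
    return all(results) if op == "AND" else any(results)

print(feasible(tree, {"steal developer laptop", "write own backdoor"}))  # True
```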

## Model-based assessments

So, in the previous sections we established that, for a better mapping of threats, we can rely on basic assumptions about the STRIDE threats and refine our system model into data exchanges, stored data, senders, receivers, and functions and processes.

With these refinements, our model now looks a bit more detailed.

While writing up these tables takes no less effort than the asset-based method, it will serve us quite a bit when rating the risks. First, we get a far more detailed picture of the system under evaluation and effectively enable others to follow our reasoning about whether and where a STRIDE threat applies. Second, we eliminate many of the previously meaningless threats. Most importantly, our threats automatically become more granular, which makes them easier to assess. Think about it: it does make a difference, concerning a threat's likelihood, whether you try to manipulate (tampering) the developed web application code (D5) while it is at rest in VS Code (E1) or while it is transferred to the web server (F2).

On the other hand, this model-based assessment requires some effort up front to construct the needed system model. From my experience, I still strongly suggest you go this route. Often it is this very step which already shows developers and stakeholders their weaknesses. Especially in big companies, it is more often system complexity than missing knowledge that introduces IT security threats.

Until now, we covered a lot of state-of-the-art methods. In the next part of this new series we will touch upon more technical details of the approaches and develop a template for assessments.

Following up on my entry about circumventing geoblocking (https://nextlevel-blog.de/circumventing-amazon-geoblocking-the-tech-way/), I have written a small Python program solving my Netflix blocking problem.

I realized that Netflix stops working, as they rigorously ban the use of VPNs and proxies. So streaming works when I use my home server as gateway/router, but not when I additionally enable the VPN.

First, I tried to enable or disable the VPN when I saw the corresponding DNS queries for either Amazon or Netflix services. However, it turned out that my TV communicates with both services regardless of which one I am currently using, so the desired state cannot be identified that way.

To ease the trouble at least a bit, I wrote a small web server which lets me make the switch with a click on a bookmark.

The following code snippet shows a default setup for a Flask server in python. It listens on the defined IP address and port.

```python
#!/usr/bin/env python3
import os

from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    app.run(host="192.168.42.19", port=5000)
```

## Handling the VPN

To handle the VPN switching, I just add or remove the IP rules that forward the traffic either to my WireGuard routing table or to the default one.
A very hacky check routine identifies the current routing state.

```python
def enableVPNRoute():
    print("enable VPN")
    os.system("ip rule add iif enp3s0 lookup 51820")
    os.system("ip rule flush")

def disableVPNRoute():
    print("disable VPN")
    os.system("ip rule del iif enp3s0 lookup 51820")

def isVPNOn():
    # hacky: look for our iif rule in the current routing rules
    ir = os.popen("ip rule").read()
    return "iif" in ir
```

## Establishing the Routes

Finally, Flask provides the app.route decorators, which I use to enable the VPN when the web server is called as http://192.168.42.19:5000/amazon, or disable it via http://192.168.42.19:5000/netflix.

```python
@app.route('/amazon')
def amazon():
    if not isVPNOn():
        enableVPNRoute()
    return "Turned VPN on. Enjoy Amazon :-)"

@app.route('/netflix')
def netflix():
    if isVPNOn():
        disableVPNRoute()
    return "Turned VPN off. Enjoy Netflix :-)"
```
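With the server running, each bookmark simply points at one of the two routes; from a shell, the switch would look like this (IP and port taken from the snippet above):

```shell
curl http://192.168.42.19:5000/amazon   # route the TV through the VPN
curl http://192.168.42.19:5000/netflix  # back to the default route
```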

Enjoy your next movies and shows 😉

As a scientist, my usual workday can be divided into three, sometimes four, essential steps:

1. Reading a lot of information
2. Looking out of the window ⇒ that is, thinking
3. Writing down new results
4. And finally sometimes, implementing proof-of-concepts

As is evident from this list, at least a third and up to half of my work consists of writing, either reports or code. To improve the efficiency of this task, I started thinking about our standard keyboard layout. QWERTY (or QWERTZ for Germans) was invented to prevent neighbouring type bars in a typewriter from getting tangled up with each other. You may notice that optimizing for this goal does not necessarily result in an optimal setup for typing.
But stop, let us think about that. We do not actually use typewriters anymore, so our main optimization goal has changed. Hence, there have been several attempts at enhancing typing efficiency in different languages. At the beginning of 2017, I started to use the German layout variant Neo. It is similar, but not equal, to the more general Dvorak layout.

The idea of these layouts is to reorganize the letters so that the most common keys lie closer together and reside on the home row, the row your fingers rest on anyway when using proper ten-finger typing.
By adapting to such a keyboard layout, you thus gain the advantage of having to move your fingers much less. Some people who converted to these layouts claim it helped them reduce strain on hands and fingers. Others suggest that moving their fingers less lets them type faster.

Looking at the first layer of keys in the Neo layout in the image above, you see all the vowels residing on the left side of the home row. In German, two vowels only rarely follow each other, so it makes sense to put them close together. After practicing this layout for one and a half years, let me put down the pros and cons.

For me, the benefits definitely outweigh the drawbacks. I typically work only on my own machines, where I have full control over the keyboard layout. While it took me only two to three months to learn the layout, I needed an additional year to reach my previous typing speed. However, when you start out fresh, you get the chance to learn typewriting from scratch, allowing you to eliminate all your bad habits on the fly. This was a big deal for me, as the new layout forces you to type blindly.

After 1.5 years, I still enjoy the calm when typing prose, as it now feels much more like flowing out of my hands. For me, this really is a boost in typing comfort, if there is such a thing.

The real benefit of Neo, for me, comes with the fourth layer. These are the symbols you type in combination with the right ALT key or the key next to the left SHIFT key.
This layer gives you access to the arrow keys, page up/down, and the home and end keys. Especially on mobile computers, which often lack these keys, this makes editing longer texts way more efficient. For all the experimental scientists, the virtual numpad on this layer is also great.
However, I only use it if I really have a lot of numbers to enter. For the occasional entry of a few numbers, I still prefer the normal number row.

On the third layer, you find most of the symbols you need for programming, as well as special characters like @ and different hyphens.
While it might seem cumbersome to reach things like / or { only by pressing two keys at once, it is fine for me, as the needed finger movement is small enough.
In addition, it is great to have direct access to the hyphen -, the en dash –, and the em dash —. If you do not know the difference, I encourage you to look it up. Good texts use the right symbols.

After moving to a different country and having some vacation days left, one of my concerns was getting my whole home IT setup working again. For me, this involves setting up my server, building the network infrastructure and wireless access points, and ensuring all my everyday services are up and running. One thing I came across in the process is the hassle of geoblocked streaming services which I still want to use. While my Netflix subscription works out of the box on my TV, the Amazon Prime Video service does not work anymore. Hence, in this article I show you how to circumvent Amazon's geoblocking with only open-source tools.

I can still sign in to my Amazon account and see all the movies and shows. However, nothing is available to watch in my country, i.e., Denmark. After a while I realized I was not looking at the German Amazon interface but at the UK one. No surprise: I can log in to the UK Prime Video service, but as my Prime subscription runs on the German branch, I do not have access to the UK movies and shows.

## What does the TV do?

So I had a look at the network traffic of my TV. I noticed that the communication with Amazon always started with a call to atv-ext-eu.amazon.com. At first I thought this endpoint geolocates my IP address and then determines to which branch I should be redirected. My plan was to intercept the response and alter it to point to the German branch, so my TV would once again talk to the German Prime Video. However, there was no such response; apparently the forwarding happens entirely on the server side.

## What then?

Normally, people use proxies or VPN services to circumvent these Amazon geoblocking problems. Unfortunately, my TV offers neither a proxy nor a VPN configuration option. One solution, thus, would be to configure my router to route all traffic via a German VPN. But then everything would always go through this bottleneck, even services which do not need it. Besides, my router does not allow for such a configuration anyway. Hence, I built my very own approach to solve this problem.

## The setup

I have a small server in my network for different services anyway. The idea is simple: intercept all traffic from my TV and route it through a VPN endpoint in Germany.

## Using an ArchLinux server as a Router

The first step is to configure the IP address of my home network server as the default gateway of my TV. Then we need to make sure the server actually forwards IP packets destined for other machines.

```shell
sysctl -w net.ipv4.ip_forward=1
```
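If you want this setting to survive reboots, it can also go into a sysctl drop-in file; the file name below is my own choice:

```shell
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/30-ip-forward.conf
```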


## VPN connection

For the VPN connection, I rented a small virtual private server in Frankfurt, Germany, with unlimited traffic and decent bandwidth. This server and the server in my home network are configured to establish a WireGuard connection. While the main setup is standard, I made some modifications. The general setup for a VPN server is well explained in the ArchLinux wiki: https://wiki.archlinux.org/index.php/WireGuard

### Server setup

The server will be the machine in Frankfurt.

```ini
[Interface]
ListenPort = 51871
PrivateKey = PRIVATEKEY_OF_VPN_SERVER

PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = PUBLICKEY_OF_HOME_ROUTER
AllowedIPs = 10.42.0.2/32, 192.168.42.180/32
Endpoint = HOME_NETWORK_IP:51871
PersistentKeepalive = 25
```


In contrast to the default WireGuard VPN setups, I added the PostUp and PostDown directives to ensure the VPN server actually performs network address translation. Obviously, you need iptables up and running for this. A second important point is that we need to include in AllowedIPs all IP addresses which may send traffic through this VPN. As I have different subnets for the VPN itself and my actual home network, that is the WireGuard address of my home server (10.42.0.2) and the local IP address of my TV (192.168.42.180).

Afterwards, we can just start the network interface with wg-quick up wg0.

### Client setup

The client is the machine in my home network.

```ini
[Interface]
ListenPort = 51871
PrivateKey = PRIVATEKEY_OF_HOME_ROUTER

[Peer]
PublicKey = PUBLICKEY_OF_VPN_SERVER
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = VPN_ENDPOINT_IP:51871
PersistentKeepalive = 25
```


## Restricting the VPN to Router Functionality

As I wrote in the beginning, I use the server in my home network for several other services, some of which are also publicly available. These services I do not want to route through the VPN but through my original internet connection, as I have much more bandwidth there.

For its routing, wg-quick sets up a second routing table to separate WireGuard's routes from your normal routing. On my system, this table is named 51820. wg-quick then creates one rule that sends all traffic not marked with 51820 through this second table, while WireGuard itself marks all traffic going to the remote endpoint with exactly that mark, so it is routed using your default table. To prevent all traffic from going through the VPN, we exchange this mark filter for a better-suited one. After starting the WireGuard interface with wg-quick up wg0, I remove the general rule which forces all traffic through the VPN and replace it with a rule that uses the second routing table only for packets arriving on the plain network interface, i.e., enp3s0 on my system. This is done using the iif (incoming interface) selector.

```shell
ip rule del not fwmark 51820 lookup 51820
ip rule add iif enp3s0 lookup 51820
```


Therefore, all traffic originating at the machine itself is routed using the default table, while only packets arriving on enp3s0 that are to be forwarded use the second table.

Finally, every device in my home network can now choose between two gateways: the Danish one at .1 and the German one via the WireGuard VPN at .19. All it takes is changing the gateway on the corresponding device and appending its IP address to the AllowedIPs in the WireGuard configuration on the VPN server.

## Only Drawback

For now, everything is working fine: Prime Video and YouTube play well over this setup. But for some unknown reason, I cannot connect to Netflix. Debugging showed that it tries to reach three different servers, while only two of these connections succeed. I still need to figure out what is wrong here…

Markdown has become one of the most powerful tools in my daily research business. Nowadays, everyone faces a lot of writing work. LaTeX, which I learned during my studies at university, is great for creating beautiful and reproducible documents without all the Microsoft Word hassles, but it has some major drawbacks.
Some people claim it is way too hard to learn and not as approachable as WYSIWYG (what you see is what you get) editors; others, like me, are just annoyed by constantly having to type one of \ { }. If you are interested in LaTeX itself, your favorite search engine will have plenty of resources for reference.
For me, markdown is the perfect trade-off between these two worlds. Hence, I show you how you can also benefit from using pandoc for academic writing tasks.

## Primer on Markdown

When talking (or writing) about markdown, I should clarify that I will discuss the pandoc markdown dialect here. To my knowledge, it is the most powerful dialect and conversion tool; just have a look at the awesome conversion possibilities listed on their homepage.

If you have not worked with markdown yet, let me shortly introduce some of the basics. This will not be a complete pandoc tutorial, and there is far more functionality than I could describe in one article. If you are interested, have a look at the manual or leave me a comment about what you need help with.
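To give you an idea of the workflow, a typical conversion call with a recent pandoc might look like this; the file names are placeholders:

```shell
# markdown to PDF (via latex); --citeproc resolves citations from the bibliography
pandoc paper.md --citeproc -o paper.pdf
# the very same source also converts to Word
pandoc paper.md --citeproc -o paper.docx
```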

### Basic Text / Paragraphs

Text is just plain text. There is nothing special about it; just write it. That is the actual beauty of markdown: it allows you to focus on the most essential part of your writing. If you need to indicate some formatting like italics, bold print, or strikethrough text, simple commands are at hand.

```markdown
*italic* renders to italic,
**bold** to bold, and
~~strike~~ to strikethrough
```

A heading in markdown is indicated by a hash sign # followed by the heading text. Deeper heading levels just use the corresponding number of hash signs. Normally, headings are numbered; unnumbered headings can be created by appending {-} to the end of the heading, which is short for {.unnumbered}. Additionally, if you output to latex or pdf files, heading level 4 and below give you access to paragraphs.

```markdown
# Headline 1 {.unnumbered}
#### Paragraph in Latex
These are generally unnumbered
```

Tables in pandoc markdown can easily be written by separating the columns using pipes |. The first line is automatically converted to the table heading. In the second line, you indicate the formatting of the table, that is, whether the text in each column is justified left, right, or centered.
:--- indicates left-justified, ---: right-justified, and :---: centered. Be aware that the relative number of dashes indicates how wide the column shall be, and that you always need at least three dashes! Thus, :---|------: means we have two columns: the first left-justified, the second right-justified, with the second taking up twice the width of the first. A caption can be added in a new line beneath the table beginning with Table: caption goes here.

In my experience, these formatting instructions are really easy to use and very helpful when creating simple documents. For scientific work, however, I mostly rely on the inline LaTeX feature which pandoc provides. It allows you to write arbitrary LaTeX code right inside the markdown file at any place. During document conversion, pandoc copies these parts verbatim into the final LaTeX document. Just keep in mind that you will lose these parts if you do not export to LaTeX in the end.

```markdown
# Pandoc style tables

:---:|:---
centered col | left aligned col

Table: Caption for the table (and yes, it gets converted to real captions ;-))

# Latex style tables

\begin{table}
\begin{tabular}...
\end{tabular}
\end{table}
```

The latter also works.

### Mathematics

If you know LaTeX, you probably enjoy typesetting equations in it; it is far easier than in WYSIWYG editors. Pandoc markdown allows you to use the same notation as LaTeX, so you can just write your equations as you are used to.

```markdown
$$a = \sum_{\forall a_i \in A} a_i^2$$
```

One of the reasons why I love pandoc markdown so much is that the above code, which renders a beautiful LaTeX formula, can be converted to a valid MS Word equation. Pandoc does not include it as an image but instead creates a proper equation object. This feature has already saved me several times; sometimes I start a document just for one equation, convert it, and copy it over to another document I am working on. Just amazing. Thanks to the developers!
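A minimal sketch of that one-equation workflow (the file name equation.md is just an example, and pandoc must be installed):

```shell
# equation.md contains nothing but the display equation, e.g.
#   $$a = \sum_{\forall a_i \in A} a_i^2$$
# pandoc infers the Word output format from the .docx extension
pandoc equation.md -o equation.docx
```

The resulting equation.docx contains a native Word equation object that you can copy into any other document.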

### Bibliography

For academic writing, you definitely need to know how to reference previous work using pandoc. In markdown this is as easy as it gets. If you are already used to the LaTeX style of \citet{} or \textcite and \citep or \cite, you will enjoy how easy citing can be.
First of all, each markdown document can have a preamble, like a LaTeX document, providing some metadata. This preamble starts and ends with a line of three dashes. A bibliography file can be provided as shown below.

```markdown
---
bibliography: bibfile.bib
---

# Heading 1
```

Actual citing is now as easy as referencing the corresponding bibtex key. What I really like is that this concept even works when you finally output to MS Word documents: you get correct citations from your bibtex bibliography.

```markdown
As is shown by @Nohl2014 (in-text citation, like \citet)
The BadUSB publication [@Nohl2014] (parenthetical citation, like \citep)
Multiple authors can also be cited [@Nohl2014; @Langner2013].
```

One of the benefits of writing in markdown is the possibility to export your final document to every format you may require. For example, you can generate a PDF file by naming the output file accordingly (-o document.pdf), or export the LaTeX source with -t latex and do the final document creation on your own.
The full command for exporting to a Word document would then look like below.
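For instance, assuming a source file named document.md, the PDF and LaTeX exports might look like this (a LaTeX installation is required for the PDF route):

```shell
# PDF: pandoc infers the format from the .pdf extension and runs LaTeX internally
pandoc document.md -o document.pdf
# LaTeX source, for compiling the document yourself
pandoc document.md -t latex -o document.tex
```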

```shell
pandoc document.md -t docx -o document.docx
```

For academic writing, however, I prefer to export the pandoc document to a LaTeX document which I then include in the required LaTeX template. Thus, I export to an intermediate file paper.tex and then use \include{paper.tex} inside the main document, e.g. sample.tex.
Additionally, experience has shown that pandoc sometimes outputs commands my LaTeX templates do not recognize, usually related to tables. I therefore replace these commands with my own table styles.
Placing the full pipeline in a file called Makefile in the same folder then allows using make on the command line to produce the final output.

```shell
pandoc paper.md -t latex -o paper.tex --bibliography bibfile.bib --natbib --top-level-division=section --toc
# Convert longtable to supertabular
# Requires \usepackage{supertabular}
sed -i -e 's/longtable/supertabular/g' paper.tex
# Compile tex document.
# sample.tex contains the preamble, style and command definitions
# and has a \include{paper.tex} for the actual content
xelatex sample.tex
bibtex sample
# Recompile for bib and toc updates
xelatex sample.tex
```

## Producing Slideshows

As shown in the previous section, you can produce beautiful documents by just writing markdown.
The magic of pandoc does not stop at producing documents; you can even create awesome slideshows with it.
Just follow the same principles as before and make sure you use headings of levels 1 through 3. Each level 3 heading starts a new slide. With the -t beamer option, you can then render awesome PDF slideshows via LaTeX.

```shell
pandoc -t beamer --listings tool.md > input.tex
# Compile presentation.tex, which holds the preamble, style and command
# definitions and has a \include{input.tex} for the actual content
pdflatex presentation.tex
```

Finally, also make sure you use a decent editor for this workflow. If you need inspiration on how to use emacs for pandoc/markdown editing, have a look at my emacs setup.

That’s quite awesome. A bit of googling solved a major problem on my Thinkpad x220t concerning very bad wlan connectivity in some cases. I just followed the suggestions on https://bbs.archlinux.org/viewtopic.php?id=132079 and it helped a lot. My wireless connection is a lot more stable, connections are established much faster, and I no longer have random disconnects after a few minutes.

Just enable those two kernel parameters for the iwlwifi module:

```
11n_disable=1
swcrypto=1
```
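One common way to make these module options persistent (the file path is a typical convention, not from the original post; adjust for your distribution) is a modprobe configuration file:

```
# /etc/modprobe.d/iwlwifi.conf (typical location)
# Disable 802.11n and use software crypto for the iwlwifi driver
options iwlwifi 11n_disable=1 swcrypto=1
```

Reload the iwlwifi module (or reboot) for the options to take effect.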

xset dpms force off