2 SHORT TALKS

To this summer's Chaos Communication Camp, I brought a nice tipi which - besides serving as a permanent chillout and tea area - hosted two short talks and subsequent discussions on the following topics:

PRACTICAL OPSEC

Version 1.0 of a work-in-progress dealing w/ what I call operational security, covering device security, encryption and redundancy as well as concepts of abstraction and modularization.

BLACKOUT SURVIVAL

Inspired by the book by Marc Elsberg as well as by my numerous hiking and trekking adventures, this presentation covers topics that are surprisingly easy to implement for yourself and that you might consider important for your own survival.

Interestingly, some of the concepts included overlap w/ those in the previous talk, which might be seen as proof that abstraction, modularization and the forming of functional groups are an important base for efficiency in any system.

UPDATE: v2 of Blackout Survival evolved into COVID-19 info and is available @ https://geeksec.de/0.pdf

L3 Hardening: Gen.2 GWx

The GWx concept, originally introduced as a countermeasure against regularly occurring DDoS attacks, has recently undergone a massive consolidation process which has proven to work very well.

Gen.1 Refresher

As a short refresher: the Gen.1 GWx concept introduced in mid 2017 worked by deploying at least 4 systems that divided the attack flow amongst them via round robin DNS, which enabled pre-filtering traffic at 4 locations. Since then, however, the attack bandwidth of distributed reflected DoS attacks has steadily increased, for the first time largely paralyzing all 4 Gen.1 GWx systems.

Gen.2: Implementation

So, it was time to overhaul and come up w/ a new and more efficient concept: Gen.2 GWx. The idea of dividing the attack traffic into multiple smaller flows is still the foundation of the whole concept, so introducing even more distribution by adding multiple IP addresses, each w/ its own route in a different country, was the next thing that came to mind. If you have 4x GWx, you can easily assign 8 IPs to each of them, resulting in 32 IPs for the whole grid. This also takes into account the fact that those shady "DDoS stresser" sites have their own business model, namely restricting their skript-kid "customers" to a limited number of IP addresses to attack.
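As a purely illustrative sketch (names, TTLs and addresses are made up), the round robin part is plain DNS - several A records for the same name, handed out in rotating order:
; hypothetical zone excerpt: one service name, many GWx IPs
www   300   IN   A   192.0.2.10
www   300   IN   A   192.0.2.11
www   300   IN   A   198.51.100.20
www   300   IN   A   203.0.113.30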

Gen.2: Hardening & Stealthiness

When it comes to hardening the GWx systems, an extended packetfilter ruleset had to be implemented, mainly blocking (as in silently dropping) a lot more INVALID traffic, packets w/ weird flag combinations and so on. Also, it became a priority to protect the 2nd layer upstream traffic, which is not directly attackable and is kept as secret as the backend IP addresses at the 3rd layer. Additionally, the DNS configuration returns only a small portion of the valid IP addresses, which is cycled randomly (w/o the help of lavalamps). Network security monitoring of netflow and syslog containing packet filter information remains highly relevant, enabling us to classify attacks and subsequently adjust mitigation concepts.
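To give an idea of what "silently dropping" INVALID traffic and weird flags can look like in practice, here is a minimal iptables sketch (not the actual GWx ruleset; interfaces, rate limiting and the 2nd layer protection are left out):
# drop packets the connection tracker classifies as INVALID
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
# drop TCP packets w/ impossible flag combinations
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
iptables -A INPUT -p tcp --tcp-flags SYN,RST SYN,RST -j DROP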

In combination w/ the provider's own DDoS mitigation, this concept seems to be highly efficient, remains self-hosted and thus under full control, can easily be expanded and comes at a fraction of the cost of the shady krautflaire approach. However, let's keep in mind that the base for many of the rDDoS attacks is old, unpatched and badly maintained systems as well as upstream providers that let invalid traffic pass.

Smartphone Hardening: Motorola

Installing the LineageOS distribution on a Nexus 6 as a base for hardening has become very easy and takes only a few minutes, assuming adb and fastboot (part of the android-sdk) are already installed on your POSIX machine.

  • Enable USB Debugging in the phone settings
  • Connect device and check by running adb devices -l
  • Reboot into bootloader: adb reboot bootloader
  • Again, check connectivity: fastboot devices -l
  • Unlock the bootloader:
fastboot oem unlock
(bootloader) slot-count: not found
(bootloader) slot-suffixes: not found
(bootloader) slot-suffixes: not found
 ...
(bootloader) Please select 'YES' on screen if you want to continue...
(bootloader) Unlocking bootloader...
(bootloader) Unlock completed! Wait to reboot
  • Flash recovery:
fastboot flash recovery twrp-3.3.1-0-shamu.img
 (bootloader) slot-count: not found                 
 (bootloader) slot-suffixes: not found                                  
 (bootloader) slot-suffixes: not found                        
 (bootloader) has-slot:recovery: not found
 target reported max download size of 536870912 bytes
 sending 'recovery' (11887 KB)…                                  
 OKAY [  0.393s]                                                   
 writing 'recovery'…                                             
 OKAY [  0.171s]                            
 finished. total time: 0.564s 
  • Select Recovery Mode and reboot into TWRP
  • Backup existing ROM (system partition) onto USB-OTG
  • Install LOS: Wipe/Factory Reset, then install - e.g. using adb sideload (see the sketch below) - a) LOS b) OpenGapps c) addonsu
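For reference, such a sideload run for the three ZIPs could roughly look like the following (filenames are examples only; in TWRP, start Advanced > ADB Sideload before each transfer):
adb sideload lineage-16.0-xxxxxxxx-nightly-shamu-signed.zip
adb sideload open_gapps-arm-9.0-nano-xxxxxxxx.zip
adb sideload addonsu-16.0-arm-signed.zip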

Note about encryption: When running LOS 15.1 and TWRP 3.2.3, encrypting the device results in a completely unstable O/S. When running LOS 16.0 and TWRP 3.3.1, encrypting the device works and results in a usable O/S. However, TWRP still does not accept the FDE password and is thus unable to mount /data.

So, after having flashed the device, now is the time for further hardening: install a local firewall like AFWall+, VPN software like WireGuard, Firefox Klar/Focus, K-9 Mail, OpenKeychain, SnoopSnitch etc., and create a backup using TWRP.

Also, the previously mentioned encryption problem can be circumvented to some degree by first creating a fresh TWRP backup and subsequently encrypting the device. Think of this coming in handy right before travelling abroad, but remember that afterwards you should a) WIPE the device using TWRP (w/o unlocking data) and then b) reflash the last backup to be able to create fresh backups from time to time.

Smartphone Hardening: Samsung

A smartphone like the Samsung S4 bought only a few years ago will most probably run Android 4.4.x "KitKat" (or 5.x if upgraded), as this is the stock ROM it shipped with right after market introduction. New devices are still sold for ~ 150€ running Android 5.x "Lollipop", which is nearly equally old. I had already flashed CyanogenMod 11 back then to have more control over the device along w/ root access, which enabled me to configure netfilter and install VPN S/W. But if you follow the Android OS version history, it becomes immediately clear that - as the ppl at LineageOS state - 7 out of 10 run outdated operating systems on their phones. Fixing this is a matter of upgrading your device, and that is what I just did; it involved testing lots of different ROMs and Android versions, which I'm going to skip in this post.
GT-I9506
The steps to upgrade an S4 LTE (official release date May 2013) from 4.4.x to a fairly current and rooted 8.1 "Oreo" are as follows, reduced to the minimum and excluding all the time spent on testing and research.

Step 1: Use heimdall to flash the TWRP recovery system onto the device. This can simply be done from the commandline after you put the phone into Download Mode (by pressing VolumeDown+Power while turning the phone on):
sudo heimdall flash --verbose --RECOVERY recovery.img
Step 2: Use heimdall to flash an updated baseband firmware containing an updated kernel and phone/modem related firmware. I prefer the GUI for this step as it gives a far better overview of what we are doing. It is not as hard as it looks: after you have downloaded the .tar file, extract it to a temp folder and see which files it contains. Afterwards, use heimdall to download the device's partition layout table (PIT). Next, select the PIT file, hit the "Add" button and select each partition and its corresponding file from the folder you extracted the .tar file to, select "No Reboot" and "Resume" and finally hit the start button.

Step 3: Flash a new ROM onto the device via TWRP. Start the device by pressing VolumeUp + Home + Power to enter its recovery mode. From there, select the relevant files in the right order (and compare their checksums), which in my case were:
lineage-15.1-20180915-UNOFFICIAL-ks01ltexx.zip d3213c4895e2565ee3a7f3dd0d47aedcbe9f621eb8f89f9c51351d92573ae5dd
addonsu-15.1-arm-signed.zip  b5cc465abb3d9b7ad0177e74693e1bbd085775fd38808f640be537e8dcd1a3e8
open_gapps-arm-8.1-nano-20181013.zip  e544ad0aea8702d73f2b2451e42c83cb96157881ce7879dcdea11e2bb4835718
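To compare the checksums before flashing, the sums of the downloaded files can simply be printed and checked against the values above:
sha256sum lineage-15.1-20180915-UNOFFICIAL-ks01ltexx.zip addonsu-15.1-arm-signed.zip open_gapps-arm-8.1-nano-20181013.zip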
It appears to me that it is easily possible - even by means of only using freely available S/W - to update all those horribly insecure smartphones out there, and it is far easier to achieve than back in the days. So - I ask myself - why is there no public service offered by the shop you bought your phone at that enables non-technical ppl to get this done, eradicating that bad thing called planned obsolescence?
SM-T585
Upgraded from stock Android 6.0 to LineageOS 15.1 / Android 8.1 on an SM-T585 tablet (2016) as well (search for "sm-t585" or "gtaxllte" for the relevant TWRP and LineageOS images):
sudo heimdall flash --verbose --RECOVERY recovery.img
Initialising connection...
Detecting device...
      Manufacturer: "SAMSUNG"
           Product: "Gadget Serial"
...
100%
RECOVERY upload successful
Ending session...
Rebooting device...
Releasing device interface...
It is interesting to note that this time the device itself does not really get identified. Last but not least: do not forget to create and redundantly store backups of the device(s) when finished w/ configuration et al.
GT-I9195i
Doing the same for an S4 mini LTE a.k.a. GT-I9195i a.k.a. serranoveltexx (official release date June 2014) running stock Android 4.4.4. With TWRP already flashed, it is important to note that heimdall v1.4.2 - as for the two previous devices - had to be built from source to really work:
git clone https://gitlab.com/BenjaminDobell/Heimdall.git
cd Heimdall
cmake . && make && sudo make install
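If the cmake run complains about missing dependencies, on a Debian-based build host they could presumably be pulled in roughly like this (package names are assumptions and may differ per distribution/release):
sudo apt-get install build-essential cmake libusb-1.0-0-dev qtbase5-dev zlib1g-dev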
Once the dependencies (like libusb-dev, libqt5 etc.) mentioned in the cmake warnings / errors are installed, it builds w/o error and flashes the device successfully. Flashing a Lineage 14.1 image is now only a matter of copying the relevant ZIP files and MD5 sums of OpenGapps, addonsu and the image itself to SD (or USB-OTG), booting the device into recovery, doing a factory reset and installing the following sha256-checksummed files:
lineage-14.1-20191010-UNOFFICIAL-serranoveltexx.zip 92715821b7dd4c1906512e75dd8327c50af3eeb5865626a65a04907b1e900704
open_gapps-arm-7.1-nano-20191012.zip 97420755446608ea226817322883192ba0c56ce4703feed9c52dc3344656ab2b
addonsu-14.1-arm-signed.zip 1c0953b2eb3c5d2e88eeb7df4d60709aeb18e8acf56fb380ce83f5acb3dcbb8f

L3 Hardening: GWx DDoS Mitigation

In the newer ages of the internet, denial-of-service attacks (DoS), their distributed variants (DDoS) and their newest reflected species (DrDoS/rDDoS) took, take and will take place increasingly often. To explain this very quickly: a Denial-of-Service (DoS) attack takes place when a single host attacks another host over the network. Distributed Denial of Service (DDoS) means that lots of often geographically dispersed aggressor hosts conduct the attack (trinoo, tfn2k and stacheldraht were famous tools for that purpose in 2001). As you can imagine, the first DDoS attacks were pretty spectacular b/c of the bandwidth achieved. Afterwards, more sophisticated Layer 7 (application layer) attacks were developed, then reflected attacks, and finally amplification came into play (see wikipedia for more details). All these attacks are not only proof of weaknesses and/or design errors in underlying internet protocols or the network service daemons implementing them. They also demonstrate their potential power, as such attacks can be an equally handy and efficient tool for governmental entities and/or their military executive branches that have an interest in e.g. wreaking havoc on a country's essential infrastructure. On an even more sophisticated level, such attacks may be conducted as part of a larger operation w/ the intent to ultimately spoof, intercept or take over certain communications to and from target host(s) or network(s). The much hyped term "cyberwar" comes to mind, accompanied by a bitter taste of being instrumentalized by the military-industrial complex to justify questionable regulations and defense budget extensions to "make the internet a safer place".

Basic Mitigation Theory

If we look into nature, we see that e.g. a river is able to transport certain amounts of water, but when a flood happens b/c of heavy rainfall (a.k.a. a distributed denial of service taking place), the original riverbed will be too small to carry all the water, which ultimately finds its own ways, forming and rearranging the surrounding landscape and whatever lies in its path. Now, if we look at that on a larger scale, a single river is most of the time only one vein of a certain area's water transportation system, and if floods happen more often, new smaller rivers might be formed to fulfill the need for larger overall capacity. The more rivers there are, the more water can and eventually will be transported w/o the harsh effects of the previous flood. So, a more complex and dynamic river system is potentially able to fully compensate the initial problem. This split-up principle can also be applied during the mitigation of a large-scale DoS, DDoS or DrDoS/rDDoS attack, subsequently described at a basic technical level.

Technical GWx Principle

Each of the GWx systems is configured to forward and/or proxy packets for given services to the real IP of the productive server. This can be achieved by implementing packetfilter or routing rules on incoming layer 3 IP traffic, or by configurations that implement a dedicated proxy / loadbalancer on the application layer. If a network or host has or itself acts as a single gateway, it can be flooded if the amount of data reaches its maximum bandwidth capacity. So, a 1Gbit DDoS attack will most probably fully saturate and thus take down a system connected via a single 1 Gbit link @ GW0.
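As a rough illustration of the layer 3 variant described above, a single GWx could forward inbound traffic to the hidden productive server like this (addresses and port are placeholders; the accompanying filter rules are omitted):
# enable packet forwarding on the GWx box
sysctl -w net.ipv4.ip_forward=1
# redirect inbound HTTPS hitting the public GWx IP to the productive server
iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 443 -j DNAT --to-destination 203.0.113.50:443
# rewrite the source address so replies travel back through the GWx
iptables -t nat -A POSTROUTING -d 203.0.113.50 -p tcp --dport 443 -j MASQUERADE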
But, if we implement a second, geographically distant GW1 w/ the same linkspeed and use round robin DNS to evenly spread the requests to both gateways, a 1Gbit attack can no longer fully saturate the bandwidth, as each of the GWx systems will only receive its 50% share of it. A GWx cluster consisting of 3 systems will reduce that to even shares of 33:33:33 percent, 4 systems to 25:25:25:25, and so on: x systems = overall bandwidth/x per system. You see that this concept has scalability built in and is ready to be expanded in realtime just by adding more GWx to the cluster and its underlying round robin DNS configuration.

GWx Hardening

As each and every GWx system will be directly exposed to attack traffic, it should be hardened thoroughly on host and network level. To name only a few measures: basic packetfilter rules for filtering certainly known-bad, unneeded traffic, and more sophisticated mechanisms like blocking, delimiting or restricting the bandwidth of hosts that send too many requests in a certain timespan, or a mechanism to filter out brute-force attacks against certain services or webpages. Extended host and network monitoring also makes sense here, but may heavily depend on your research capabilities or your intention to analyze and further develop your mitigative skillset. Security is a process, and should neither be seen as, nor advertised and marketed as, a snake-oilish product. Last but not least, it is of course crucial to keep the real IP secret and to deploy packet filtering there as well, allowing inbound traffic to the protected services only from the GWx boxes.

Practical Insights and Perspectives

Having dealt w/ 30+ large scale attacks (that means at least hundreds of megabit up to a few gigabit) in the last two years alone, I observed that they all shared the following characteristics (4x GWx, 2 providers, 4 DCs):
  • overall attack duration mostly only a few minutes
  • usually shifted by a few minutes
  • maximum + overall attack bandwidth limited
  • attacker unable to fully disrupt GWx protected services ever since
As DDoS attacks and certain, questionable mitigation techniques (as opposed to lotek, simple, functional and achievable ones) have recently become a lucrative business model, the "customer" (or rather attacker) most probably pays for a certain package that seems to limit him to a certain target IP at a time and of course a limited bandwidth. Staying rather stealthy over the long term also seems to be a plausible demand for the DDoS provider on the one hand as well as its "customer" on the other, so the average attack will mostly take place during high-load periods and last rather briefly but occur often, so that fully legitimate clients get really frustrated.

Generally speaking, and leaving aside the fact that core network providers are also able to filter e.g. using BGP, one efficient way to mitigate DoS, DDoS and DrDoS/rDDoS attacks would be to form a cyberarmy of GWx machines, geographically spread all over the world and using different providers and physical datacenters - a technique similarly deployed by the circumventive/anti-censorship tor network. But the GWx cyberarmy - in contrast to botnets - does not have to consist of hundreds or thousands of machines; it only needs a few high-bandwidth servers, ideally carefully chosen dedicated root servers, optionally already DDoS protected in their own network. It could also make sense to have a variable list of GWx systems that change IPs or even providers every few months (e.g. if the monitoring shows that certain gateways are attacked more often and w/ more bandwidth).

In the end, the efficiency of network offense as well as network defense heavily depends on the skillset and creativity of the red and the blue team respectively. Variability and flexibility have always been and always will be an essential part of the road to success, be it for natural species or for survival in a clearly overhyped but nonetheless unambiguously fought cyberwar. From my personal experience, and if you generally look into the successful spread of lots of things, be it historically relevant inventions or open source software, simplicity is often the key element of consecutive efficiency and widespread usage.

L7 Hardening: Anti-Bruteforce

No matter which services you run - it will not take long until somebody or something starts bruteforcing them. Instead of manually constructing a network-based mechanism like netfilter string matching combined w/ ipt_recent, it probably makes sense to use what we already have and which does the same: fail2ban. So, as a simple example, let's deal w/ WordPress login bruteforcing. If we look into the server logfiles, relevant entries will contain:
"POST /wp-login.php HTTP/1.1"
So now simply extend fail2ban to include that by first creating
/etc/fail2ban/filter.d/wp-auth.conf
and filling that w/ the following if it fits your site's structure:
[Definition]
 failregex = ^<HOST> .* "POST /wp-login.php
Now just include the new configuration in the (hopefully) already existing
/etc/fail2ban/jail.local
by adding
[wp-auth]
 enabled = true
 filter = wp-auth
 action = iptables-multiport[name=wp-auth, port="http,https"]
 logpath = /var/www/log/error.log
 bantime = 1200
 maxretry = 5
If implemented properly, we just need to restart fail2ban and it should mention the new rule:
2017-10-12 17:39:16,554 fail2ban.jail [7910]: INFO Creating new jail 'wp-auth'
2017-10-12 17:39:16,554 fail2ban.jail [7910]: INFO Jail 'wp-auth' uses pyinotify
2017-10-12 17:39:16,593 fail2ban.jail [7910]: INFO Jail 'wp-auth' started
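Optionally, the state of the new jail and its currently banned IPs can be checked at any time (a quick sanity check, not part of the original setup):
sudo fail2ban-client status wp-auth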
If the underlying principles are well understood, protecting other - not necessarily web-based - services should not be a hard task.

Basic IP Recon

Out of curiosity, it might be quite interesting to find out where the logged WordPress login bruteforce attacks (in my case, about 150 in only a few hours) originate from. So, let's first write a very basic skript to extract relevant data from our fail2ban.log:
#!/bin/bash
# print all ban (and later unban) events for the wp-auth jail
grep 'WARNING \[wp-auth\]' /var/log/fail2ban.log
exit 0
This will print out all the bans as well as the unbans, which take place 20 minutes later if the configuration is left in its default state. Now, let's process that data a bit more to first reduce it to relevant content, eliminate duplicate entries, and finally try to look up the IP addresses involved. A simple approach could look like:
sudo ./WPBF.sh | grep Ban | cut -d " " -f 7 | sort | uniq | nslookup | grep "name ="
and gives us quite some valid information. Mostly originating from .ru and .cn, perhaps some .jp and .tr, this is quite the usual background noise, of which only 4 entries look a bit uncommon:
178.135.28.218.in-addr.arpa name = pc0.zz.ha.cn.
213.171.28.218.in-addr.arpa name = pc0.zz.ha.cn.
99.76.28.218.in-addr.arpa name = pc0.zz.ha.cn.
189.224.30.60.in-addr.arpa name = no-data.
Let's check out the WHOIS information for each of them:
inetnum: 218.28.135.176 - 218.28.135.191
netname: HA-ZZ-USAT-LTD
country: CN
descr: Henan University Science And Technology Limited Company,
descr: No 7 Dongqing Road,
descr: Zhengzhou City,
descr: Henan Province.
Okay, a university network. Like back in the old days :) The next one:
inetnum: 218.28.171.208 - 218.28.171.215
netname: HA-ZZ-HUANJBHJ-GOV
country: CN
descr: HUANJBHJ Gov,
descr: SSDDYEBH,
descr: ZhengZhou City,
descr: Henan Provice.
Hmm, the government....and the last one
inetnum: 218.28.0.0 - 218.29.255.255
netname: UNICOM-HA
country: CN
descr: China Unicom Henan province network
descr: China Unicom
is - at least in my experience - seen very often in any kind of portscan, spam or bruteforce attack. Now to the last highlighted one:
inetnum: 60.30.224.0 - 60.30.239.255
netname: CNPC-TJ
country: CN
descr: CNPC Dagang Oilfield Communication Corporation
Never heard of something like this before - it could be interesting to find out whether that is really some kind of measuring device or a "normal" PC. Since it has 1723/tcp open, I suspect the former. Also, the question always remains: are these attacks really originating from these addresses, or are they just backdoored boxes? If we do a quick portscan, all four IPs have one thing in common:
9999/tcp open abyss syn-ack
and a quick search reveals that it might be a remote access trojan called "The Prayer". To check all hosts for that backdoor, we can simply do s/t like
for i in `sudo ./WPBF.sh | grep Ban | cut -d " " -f 7 | sort | uniq ` ; do nmap -p 9999 $i --host-timeout=1 | grep open -B 3; done

L7 Hardening: Security Headers

There are quite some directives at hand that can be added to your webserver configuration to achieve hardening against many attacks. Most websites - even those that really should - do not care, and thus receive a grade F when being checked by schd.io. It is pretty straightforward to change that completely. In nginx 1.6.2, just edit the site's config file and insert:
add_header Strict-Transport-Security "max-age=31536000";
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Xss-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
The equivalent in nginx 1.8+ would be:
add_header Strict-Transport-Security "max-age=31536000";
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Xss-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
This already gives us a grade C, but there is another powerful mechanism: the Content Security Policy (CSP), which restricts the abilities of the browser to those predefined by you, especially by only allowing certain servers to serve certain elements of the site's content in the first place. So let's take a closer look at this basic rule:
add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self' ; style-src 'self' 'unsafe-inline' ";
This is restrictive and works only on static websites not involving any other sources for images, scripts or fonts. As in most if not all cases when dealing w/ security, all this also involves the well known, eternal conflict: security vs. usability. For example, webmail applications and underlying plugins often include inline javascript and thus need slightly relaxed restrictions, expressed by
add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline' ";
WordPress for example would require restrictions similar to
add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com data:";
The most efficient way to implement a valid CSP for your website and/or application is to use the debugging shortcut F12 in the browser of your choice and check the console for relevant messages, while at the same time creating parameters for your CSP that fit the actual operational needs.
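During that process, it can also help to deploy the candidate policy in report-only mode first, so violations only show up in the console (and optionally at a reporting endpoint) w/o breaking the site - a hypothetical nginx example (the report path is a placeholder):
add_header Content-Security-Policy-Report-Only "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; report-uri /csp-report" always;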

AVSx: Hardening the SPAM Perimeter

In the course of a recent AVSx rollout, we had the opportunity to mitigate the serious SPAM problem of a customer. This included analyzing the specific situation, considering different approaches to eliminate or at least minimize the known issues, and also involved a detailed measurement of the spam statistics over time, ultimately leading to a short whitepaper which is yet to be published. One important element of our first approach is to not depend too much on external (or internal) servers at runtime, so that the solution is pretty much self-contained and thus works autonomously.

Pre-Rollout Situation

The customer receives thousands of spam mails per day, many of which get forwarded to the internal mailserver. Since no catchall account is defined at that internal mailserver (the sub-contractor says this is not possible), non-delivery reports (NDR) are sent out. Lots of them. This has even led to the customer itself getting blacklisted in recent months through this well-known backscatter problem. Also, there was heavy usage of black- and whitelists, which may have influenced the overall situation in a negative way as well, since that badly interferes w/ bayes filtering and the overall learning process if not handled carefully.

Improvement v1: Static LDAP

There are multiple ways to reduce the amount of bad e-mail, but the most common overall practice is to make use of spamassassin and clamav. Having these tools at hand together w/ a good mailserver S/W like postfix, things can be tuned in a very efficient way. Postfix hardening alone already brings quite some mechanisms. Using block- and blacklists is another approach, but one which should not be used from the start, because the spamfilter might not get trained if nearly no mail reaches the server itself. Imagine a non-trained spamfilter if the blacklist is unreachable. In my eyes, it is crucial to closely survey the situation of spam occurrence and scores as a basis for statistics. This again serves as a base for the definition of the SPAM tag- and kill-levels, which should normally be set to 4.5 and 16 respectively. When analyzing the score distribution, we can identify a peak which should ideally be not too far left of the kill-level, or in contrast, on a well trained system, even right of the kill-level, which means that most mails already get discarded. But what about all the NDRs? We can query the internal LDAP server to get the information we need. So, at first, installing ldapvi makes sense. Then, if we have valid credentials, we can simply do
ldapvi -b "ou=xxx, dc=xxx, dc=xx" -h v.w.x.y -D you@yourdomain > ALLDATA.ldap.txt
and then get a list of valid e-mail addresses out of this by doing
grep "yourdomain" ALLDATA.ldap.txt | grep mail | cut -d ":" -f 2 | cut -d " " -f 2 | tee -a /etc/postfix/relay_recipients
This list of e-mail addresses is the base for the definition of a relay_recipients file; mail will only be accepted if the corresponding address is also found in the LDAP directory. This eliminates the complete problematic NDR situation previously found, b/c the internal mailserver normally never has to state that mail was sent to a user w/o a corresponding mail: entry in the LDAP directory dump. In this static LDAP scenario, only an account that gets removed, or the internal mailserver itself having problems, would still produce an NDR. One could query the LDAP DB once a month and recreate the relay_recipients file from that, but in general I guess manually changing/adding/removing users in the file makes more sense, and again, we do not want to depend too heavily on other servers. To use the relay_recipients file, the following parameter should be set in the postfix config
d1g@isp:~$ sudo postconf relay_recipient_maps
relay_recipient_maps = hash:/etc/postfix/relay_recipients
The maintenance of the valid recipients - if needed at all - could optionally be done by the local admin if we have a (cron-)skript in place that rebuilds the DB regularly by doing e.g.
postmap -v /etc/postfix/relay_recipients
postmap: name_mask: all
postmap: inet_addr_local: configured 2 IPv4 addresses
postmap: inet_addr_local: configured 2 IPv6 addresses
postmap: set_eugid: euid 1000 egid 1000
postmap: open hash relay_recipients
postmap: Compiled against Berkeley DB: 5.3.28?
postmap: Run-time linked against Berkeley DB: 5.3.28?
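Such a (cron-)skript could be as small as a single crontab entry; a hypothetical example for /etc/cron.d/ (the schedule and the reload step are assumptions):
# /etc/cron.d/relay-recipients: rebuild the postfix DB every night at 03:00
0 3 * * * root postmap /etc/postfix/relay_recipients && postfix reload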
In this static LDAP setup, the mailserver accepts mail only for previously defined valid users, and simply by this already rejects a very high percentage of all spam mails, especially all those that produced the problematic NDRs.

Improvement v2: Dynamic Lookups

While solely relying on dynamic lookups might get you into trouble as described above, having dynamic lookups only in cases where the recipient is not found statically makes sense, and also brings any change made in the LDAP directory immediately to the outside world. In order to enable the dynamic lookup feature, we have to first install postfix-ldap, and then define
relay_recipient_maps = hash:/etc/postfix/relay_recipients, ldap:/etc/postfix/ldap-aliases.cf
The tricky part is the ldap-aliases.cf file itself, as in our case it had to contain
server_host = x.x.x.x
search_base = ou=xxx, dc=xxx, dc=xx
version = 3
timeout = 10
leaf_result_attribute = mail
bind_dn = user@domain
bind_pw = userpassword
query_filter = (mail=%s) 
result_attribute = mail, addressToForward
Afterwards, restart postfix, and/or optionally test the setup by doing
postmap -vq user@domain ldap:/etc/postfix/ldap-aliases.cf
So, we now have both a static and a dynamic mechanism in place, which makes the system rather failsafe and ready for immediate LDAP directory change propagation. Last but not least, keep in mind: if a valid user is listed in LDAP, but the corresponding mailbox is not available for whatever reason on the local mailserver, non-delivery reports (NDR) will be sent out!

Improvement v3: Query Proxy Addresses

In some cases, the LDAP query has to be adjusted to the given scenario:
server_host = x.x.x.x
search_base = ou=xxx, dc=xxx, dc=xx
version = 3
timeout = 10
leaf_result_attribute = mail
bind_dn = user@domain
bind_pw = userpassword
query_filter = (proxyAddresses=smtp:%s) 
result_attribute = mail, addressToForward
After a restart of postfix, the mechanism works as intended.

OpenPGP Key Recreation and Revocation

Despite nearly having forgotten to blog about it, the time has come to get myself a stronger OpenPGP keypair. But what about the folks I have already established a secure connection with using the old key 0x800e21f5, and what about the rest of the internet? It's not as complicated as one might think.

1. Key Creation

Key creation is very simple if you use GnuPG on Linux:
0x220b:~$ gpg --gen-key
You can leave the default options (RSA/RSA, 4096bit, never expires) until it comes to name, e-mail and comment, where you should fill in the personal data associated w/ the use of the key. In most cases, one e-mail address is not enough, but you can just add another one like this:
0x220b:~$ gpg --edit-key 6C71D217
gpg> showpref
[ultimate] (1). Peter Ohm (NetworkSEC/NWSEC) <p.ohm@networksec.de>
 Cipher: AES256, AES192, AES, CAST5, 3DES
 Digest: SHA256, SHA1, SHA384, SHA512, SHA224
 Compression: ZLIB, BZIP2, ZIP, Uncompressed
 Features: MDC, Keyserver no-modify
gpg> adduid
Now enter the other e-mail addy and a relevant comment if you wish. So, we now have a fresh key - but what about the old one(s)? First, we should use them to sign the new one:
0x220b:~$ gpg --default-key 800e21f5 --sign-key 6C71D217
0x220b:~$ gpg --default-key 7BB7A759 --sign-key 6C71D217
and then finally give everybody access to our new public key by:
0x220b:~$ gpg --keyserver pgp.mit.edu --send-key 6C71D217
2. Key Revocation

Okay, now everybody needs to be able to see that the old keys are no longer in use. This can easily be achieved by first creating a revocation certificate for each of them, then importing that into our own keyring and finally exporting the revoked keys to the internet. Let's do it w/ a small shell skript and gpg2:
#!/bin/bash
# revoke both old keys and push the revocations to the keyserver
for i in 7bb7a759 800e21f5
do
 # interactively create a revocation certificate for key $i
 gpg2 --output revoke-$i.asc --gen-revoke $i
 # import it into the local keyring, marking the key as revoked
 gpg2 --import revoke-$i.asc
 # publish the now-revoked key
 gpg2 --keyserver pgp.mit.edu --send-keys $i
done

3. More Key Distribution

I also recommend sending your new public key to everybody you have already set up an encrypted communications channel with, as they will be the only ones possibly still using the old key material (most OpenPGP clients refuse to use revoked keys for encryption) and as it is especially them who need to be informed about any changes. So, even if somebody interested in establishing a secure communications channel did not yet get your new public key, all that remains to be done is:
gpg --keyserver pgp.mit.edu --recv-keys 6C71D217
gpg --keyserver pgp.mit.edu --refresh-keys
...and don't forget to attach your own public key if it's a first-time contact.
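Exporting your public key into a file that can be attached is a one-liner (the output filename is arbitrary):
gpg --armor --export 6C71D217 > pubkey-6C71D217.asc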

TLS Hardening: Postfix

Dealing w/ TLS in postfix is straightforward, but there are too many options to list them all. As a prerequisite, you may want to be able to look at TLS information in more detail - in the logs of the server as well as in the headers of the mails themselves.

This can be achieved by setting
smtpd_tls_received_header = yes
smtpd_tls_loglevel = 1
smtp_tls_loglevel = 1
in /etc/postfix/main.cf. Furthermore, if not already set
smtpd_use_tls = yes
smtp_use_tls = yes
smtpd_tls_security_level = may
lmtp_tls_mandatory_ciphers = high
smtp_tls_mandatory_ciphers = high
smtpd_tls_mandatory_ciphers = high
smtp_tls_security_level = may
smtpd_tls_mandatory_exclude_ciphers = MD5, DES, ADH, RC4, PSK, SRP, 3DES, eNULL
smtpd_tls_exclude_ciphers = MD5, DES, ADH, RC4, PSK, SRP, 3DES, eNULL
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3
smtpd_tls_protocols = !SSLv2, !SSLv3
also makes sense, but still only gives us a C+ rating. The folks at Qualys recommend
tls_export_cipherlist = aNULL:-aNULL:ALL:+RC4:@STRENGTH
tls_high_cipherlist = aNULL:-aNULL:ALL:!EXPORT:!LOW:!MEDIUM:+RC4:@STRENGTH
tls_legacy_public_key_fingerprints = no
tls_low_cipherlist = aNULL:-aNULL:ALL:!EXPORT:+RC4:@STRENGTH
tls_medium_cipherlist = aNULL:-aNULL:ALL:!EXPORT:!LOW:+RC4:@STRENGTH
tls_null_cipherlist = eNULL:!aNULL
but this also gives us C+. Doing more research, we finally get a B - still w/ a self-signed cert - by using
tls_export_cipherlist = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!MEDIUM:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
tls_high_cipherlist = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!MEDIUM:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
tls_legacy_public_key_fingerprints = no
tls_low_cipherlist = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!MEDIUM:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
tls_medium_cipherlist = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!MEDIUM:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
tls_null_cipherlist = eNULL:!aNULL
Together w/ a letsencrypt cert, we finally receive an A grade and full PCI-DSS compliance. Mission accomplished! P.S.: If you are interested in a complete TLSv1.2 cipherlist, just issue
openssl ciphers TLSv1.2
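To double-check from the outside which protocol and cipher the server actually negotiates, a quick manual test could look like this (the hostname is a placeholder):
openssl s_client -starttls smtp -connect mail.example.org:25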
Addendum: If you are on the lookout for governments deploying weak encryption, then this is for you:
May 5 01:44:08 isp postfix/smtpd[24611]: connect from correo.palmira.gov.co[190.144.251.105]
 May 5 01:44:09 isp postfix/smtpd[24611]: SSL_accept error from correo.palmira.gov.co[190.144.251.105]: -1
 May 5 01:44:09 isp postfix/smtpd[24611]: warning: TLS library problem: error:1408A0C1:SSL routines:SSL3_GET_CLIENT_HELLO:no shared cipher:s3_srvr.c:1440:
 May 5 01:44:09 isp postfix/smtpd[24611]: lost connection after STARTTLS from correo.palmira.gov.co[190.144.251.105]
 May 5 01:44:09 isp postfix/smtpd[24611]: disconnect from correo.palmira.gov.co[190.144.251.105]
A small snippet to look for more of that kind:
#!/bin/bash
#
# See which clients try to connect w/ old and insecure SSL
#
grep "accept error" $1 | cut -d ":" -f 4 | sort | uniq
exit 0