Hardening Debian Against Common Surveillance and Security Threats v18.11.10

Table of Contents

Introduction
Design Philosophy
Threat Model
Download and Verify Installation Image Signatures
Write Verified Image to Installation Media
Protect Motherboard Firmware
Operating System Installation Configuration
Default Deny Firewall
Kernel and Network Hardening
Routing Package Management through Tor
Minimizing your Software Attack Surface
Routing DNS Traffic through Tor
Blacklisting Domains with Hosts
Configuring Mandatory Access Control
Firewalling USB Interfaces
Locking the Bootloader
Automating Security Scans
Running Security Audits
Configuring a System-Wide VPN
Hardening the Web Browser
Keeping Computing Local
File and Backup Encryption
Testing Your Configuration
Closing Thoughts
Useful Links and References

Introduction

What happened to the Internet? We were sold the promise of a space in which to freely explore information, a bastion of digital liberty. So why does it seem as though every online entity is maliciously trying to harvest our information and control every aspect of our digital presence? Why are industries and governments so driven to weaponize our own data against us? When I think about the state of the modern web, I cannot help but recall an old comic shared circa 2008:

It depicts the web as a hellscape requiring a metaphorical armored tank of a web browser simply to interact with any site. The image of a battlefield certainly embellishes the situation, although it is not entirely inaccurate; in 2016 NATO officially declared the Internet to be a warzone. I would not consider someone foolish for treating every bit of information entering their computer as a potential assault on their digital well-being. That is why, in this walk-through, we will look at strengthening every weak point and plugging as many holes as practicable in a Debian installation. We want to ensure that you are the absolute authority who has the final say as to what does and does not get to access your computer.

Design Philosophy

The aim of this document is to deliver instructions in a dense and hopefully easy to understand manner. The instructions are presented in a CLI terminal format whenever applicable. Followed beginning to end, in order, this document outlines a way to configure a trustworthy, strong foundation on which to construct your own secure and privacy oriented computer.

This document assumes the reader has already heeded introductory guides: resources which emphasize migrating away from the products and services of known surveillance state collaborators. Here, we focus instead on changes and additions which can be made to your security strategy. While many of these methods are described as applied to Debian, most should readily translate to Debian derivatives and other GNU/Linux distributions.

We assume no trust beyond the NIC. Some approaches to securing your privacy enjoy the benefits provided by router hardening; here, the local machine is considered our only sanctuary. This guide also focuses on building out a system from a fresh installation, although nothing should prevent readers from individually adopting some of the outlined practices on an existing installation.

Some of the details surrounding the instructions presented in this document are left intentionally vague. This is done for a few reasons.

1) To keep the guide general and applicable beyond strictly Debian GNU/Linux.
2) To encourage readers to fine tune these configurations to suit their own specific needs.
3) To encourage readers to research beyond what is presented in this guide.

We will focus heavily on free-libre software since, while free software does not guarantee security, it is a hard prerequisite to constructing a trusted computing environment. There is commercial proprietary security software available for GNU/Linux; however, we cannot place trust in a program to do only what it claims if nobody is allowed to truly investigate its inner workings.

Threat Model

This document addresses the threat faced by most of us: automated dragnet surveillance. Whistleblowers, activists and political dissidents may encounter targeted advanced persistent threats. The other 99.9% of us should instead be concerned with wider pervasive monitoring. We will also cover protecting your system against snooping roommates, coworkers and others who may have physical access to your hardware. And while the methods outlined here cannot guarantee protection to dissidents, whistleblowers or journalists, they may still be helpful as smaller components in a broader security strategy.

What we are attempting to defend against:

1) Bulk data collection.
2) Surveillance capitalism.
3) Common adware, malware and spyware.
4) “Evil maid” unauthorized access.

What we are not attempting to defend against:

1) Targeted law enforcement operations.
2) Targeted nation state operations.
3) Other advanced persistent threats.

Below is a rough conceptual model of what we will be building. It forces most input (whether network packets, web objects, USB peripherals, or otherwise) to traverse a 3 ½ tiered defensive structure.

Line one represents the default deny firewalls which will ultimately make up our outer perimeter. Much of it permits incoming data only at the user’s explicit discretion. And, since all users are prone to error, line two provides a second opinion based on blacklists pre-established by third parties. If

the inbound data matches any of these items it will be automatically rejected, unless otherwise authorized by the user.

Line three depicts a layer of various confinement strategies which restrict the potential damage that can be unleashed by a malicious program which may have evaded lines one and two. Line three, and the system as a whole, will be monitored by a number of malware and intrusion detection programs. The user is responsible for the remediation of any true positives that are discovered.

The entire system, as well as incoming and outgoing data, will be encapsulated in a layer of encryption as much as is practicable. And, in an effort to truly limit outgoing data as much as possible, any processing that can be done inside of the perimeter will be done locally.

Download and Verify Installation Image Signatures

To ensure that the installation image has not been tampered with or modified between you and the packagers, we want to check published hash sums against our own hash sum of the image. Download the Debian netinst ISO (substituting the latest release version) for your architecture and its hash sums:
wget --https-only --no-cookies https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-9.5.0-amd64-netinst.iso
wget --https-only --no-cookies https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA512SUMS
wget --https-only --no-cookies https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA512SUMS.sign

These will need to be checked against the Debian developer keyrings. Install the debian-keyring package:
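On Debian this is a single command:
sudo apt-get install debian-keyring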

Use gnupg to verify the signing key of the “SHA512SUMS.sign” file downloaded earlier with:
gpg --no-default-keyring --keyring /usr/share/keyrings/debian-role-keys.gpg --verify SHA512SUMS.sign

Make sure that gpg reports a good signature and take note of the key fingerprint:

Compare that key fingerprint to a Debian CD Signing Key presented at https://www.debian.org/CD/verify. They should be a match:

With that evidence of authenticity, we can reasonably trust SHA512SUMS in a hash comparison. Generate a SHA512 hash sum for the downloaded Debian image with:
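For example, substituting the filename of the image you downloaded:
sha512sum debian-9.5.0-amd64-netinst.iso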

Compare the newly generated hash to the hash provided by Debian:

There would have been no output if the hashes matched, yet here there appears to be a difference between the two files. Checking the contents of SHA512SUMS reveals that it also contains the hashes of other installation images, the Mac and XFCE variants, which we will not be using.
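One way to narrow the comparison to the image we actually downloaded (a sketch; adjust the filenames to match your release) is to isolate the relevant line and diff against it:
grep debian-9.5.0-amd64-netinst.iso SHA512SUMS > netinst-only.txt
sha512sum debian-9.5.0-amd64-netinst.iso | diff - netinst-only.txt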

If you run the diff command again the two hashes should now match, indicated by the absence of any output:

Write Verified Image to Installation Media

With the installation image authenticated, it is now time to write it to trusted installation media. Wipe any previous data that may still be on the designated installation drive (sdX representing the demonstration drive in this example):
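One way to do this, assuming /dev/sdX really is the target drive and not a disk you care about:
sudo wipefs --all /dev/sdX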

Create a new FAT32 partition in its place:

Write the trusted installation image to the drive:
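For example, with dd (again substituting the correct device node; this is destructive):
sudo dd if=debian-9.5.0-amd64-netinst.iso of=/dev/sdX bs=4M && sync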

This can be accomplished through any number of means; gnome-disk-utility and gparted are GUI tools that will do the job equally well.

Protect Motherboard Firmware

Before installing any software, make sure that your motherboard firmware is up to date. It is also pertinent to create a global passphrase for your motherboard firmware. We don’t want just anyone to be able to quickly, casually boot from an external drive.

Disable any ports that will go unused such as PCI, SATA or USB.

Restrict the boot order so that only the primary boot drive will boot by default. Completely disable any options for network boot.

If you disable any boot splash options that display which keys access the EFI menus, you will have raised the barrier to others booting untrusted media, if only by a small measure. Obviously, your EFI interface will be organized differently depending on the vendor.

Support for secure boot has not yet been implemented in Debian as of the 9 (Stretch) release. Even if it were, we would only want to enable it if the board provides options for users to manage their own keys.

Operating System Installation Configuration

To keep the system lean, we use the Debian netinst image, which does require a network connection partway through the installer. We will also be encrypting the installation drive in order to prevent your data, at rest, from being read or tampered with. Boot from the trusted drive created earlier and select “Install”:

Do not connect to any network until prompted. Once asked for a root account password, disable the root account by skipping root password creation. The sudoers file will automatically be configured for the default user this way.

At the partitioning options, select encrypted LVM and follow the passphrase prompts. You will be asked if you would like to create separate partitions for /var, /home and /tmp although this can complicate managing LUKS header backups so I recommend the default single partition option.

The installer will take some time to overwrite existing information on the target drive to prevent leaking metadata about the new encrypted volume. Continue through the prompts and make sure to opt out of providing installation data to the package survey.

During tasksel, unselect every option except for standard system utilities. In order to reduce the attack surface, we will try to keep installed packages to a minimum. Any desktop environment we install later will be its *-core meta package for the same reason.

A quick reboot later and you should arrive at the CLI login prompt:

The result is something of a blank canvas. We could have elected to include some default options earlier, but that would have left us with many potentially unnecessary components to later secure or to remove completely.

Default Deny Firewall

The first thing we want to bring up is the firewall configuration. This will require some basic firewall rules. The Arch Linux Wiki provides a great resource for building a simple, stateless firewall. Alternatively, we can utilize the ipkungfu package for a more tailored configuration. For simplicity, we will create a minimal iptables rule set which dynamically rejects all input to ports that are not in use by an existing connection.

To begin with, iptables is totally empty, allowing all traffic in and out freely. This can be seen with:
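sudo iptables -nvL
(The -nvL switches list all rules verbosely without resolving names; the empty default chains confirm nothing is being filtered yet.)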

Lay the foundation for our firewall rules by dropping all traffic coming in or going out:
sudo iptables -P INPUT DROP
sudo iptables -P OUTPUT DROP
sudo iptables -P FORWARD DROP

Next, create rules that allow incoming traffic for established connections and loopback:
sudo iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -i lo -j ACCEPT

Follow with rules rejecting malformed packets and any packets which do not match the previous rules:
sudo iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
sudo iptables -A INPUT -j REJECT --reject-with icmp-proto-unreachable

Lastly, since we trust our local computing environment, allow outgoing traffic on the OUTPUT chain:
sudo iptables -A OUTPUT -j ACCEPT

Additional rules can be added as needed. On a more conventional installation, one might want to create a rule allowing ICMP packets so that their system can receive and respond to network pings. Since we will be blocking ping requests, we can safely ignore ICMP.

While most users may still find themselves connected to IPv4 networks, we should not become complacent and rely on the absence of IPv6 support to prevent unwanted IPv6 traffic. Even though many networks, especially home networks, are IPv4, you may still find yourself connecting to IPv6 enabled networks. The IPv6 firewall is managed through a separate but nearly identical tool, ip6tables. This means we can create an almost identical rule set to the IPv4 rules shown above:
sudo ip6tables -P INPUT DROP
sudo ip6tables -P OUTPUT DROP
sudo ip6tables -P FORWARD DROP
sudo ip6tables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo ip6tables -A INPUT -i lo -j ACCEPT
sudo ip6tables -A INPUT -m conntrack --ctstate INVALID -j DROP

sudo ip6tables -A INPUT -j REJECT --reject-with icmp6-port-unreachable
sudo ip6tables -A OUTPUT -j ACCEPT

Once these rules are active (you can check with ip6tables -nvL) they need to be configured to remain in effect even after a reboot. A convenient way to do this is to install the iptables-persistent package: sudo apt-get install iptables-persistent. After installing, iptables-persistent will prompt whether you want to save the currently active rule set. If you later decide to modify any rules and want to ensure that those changes become persistent, simply invoke that prompt again by running:
sudo dpkg-reconfigure iptables-persistent

This is by no means an exhaustive firewall. You may wish to restrict outgoing traffic to just the ports used by your programs, only exceptionally poking holes which will be reciprocated by the inbound connection tracking. Most users who opt for this method will want to allow outgoing traffic through ports 80 and 443 at a bare minimum.

Kernel and Network Hardening

There are several parameters in sysctl.conf affecting packet handling and logging which can be hardened. It is here that we ultimately reject all ICMP echo requests, regardless of whether the firewall accepts ping requests. Edit sysctl.conf to uncomment or add the settings below:
sudo vi /etc/sysctl.conf

Some brief details on what each of the selected options affects:

Attempt to hide from unsophisticated scans by ignoring ping requests. "net.ipv4.icmp_echo_ignore_all=1"

Do not accept ICMP redirects, preventing man-in-the-middle attacks. "net.ipv4.conf.all.accept_redirects=0" "net.ipv6.conf.all.accept_redirects=0" "net.ipv4.conf.default.accept_redirects=0" "net.ipv6.conf.default.accept_redirects=0"

Do not send ICMP redirects since this is not a router. "net.ipv4.conf.all.send_redirects=0"

Do not accept IP source route packets since this is not a router. "net.ipv4.conf.all.accept_source_route=0" "net.ipv6.conf.all.accept_source_route=0" "net.ipv4.conf.default.accept_source_route=0"

Log packets with incorrect or unroutable source addresses. "net.ipv4.conf.all.log_martians=1" "net.ipv4.conf.default.log_martians=1"

Protect against IP spoofing by checking if a packet is routable through the interface it arrived on. "net.ipv4.conf.all.rp_filter=1"

Prevent network observers from deducing uptime and patch schedules. "net.ipv4.tcp_timestamps=0"

Enable bad error message protection. "net.ipv4.icmp_ignore_bogus_error_responses=1"

Hide kernel pointers from unprivileged users. "kernel.kptr_restrict=2"

Disable responding to SysRq key combination, restricting access to extra console abilities (should only ever be needed for troubleshooting). "kernel.sysrq=0"

Append the process ID to core dump file names so that any core dumps are easier to trace. "kernel.core_uses_pid=1"

Once you have finished making edits, apply the changes with:
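sudo sysctl -p
(This re-reads /etc/sysctl.conf and applies the values immediately, without a reboot.)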

You can always check the settings with: cat /etc/sysctl.conf

Routing Package Management through Tor

Even though Apt verifies package signatures, it does not use encryption, so it is still possible for any observer between you and the repository to determine which packages are being installed on your system. One way we can address this is to point Apt toward a source that supports HTTPS and configure apt-transport-https. However, there is a more robust solution. The Debian team supplies *.onion addresses for package management through Tor. Apt will be slower, but this will not only conceal the contents of your updates; the server providing them will also be unable to discern to whom they are being sent.

We can start configuring this by installing apt-transport-tor:
sudo apt-get install apt-transport-tor

And redirect the addresses in /etc/apt/sources.list as follows:
deb tor+http://vwakviie2ienjx6t.onion/debian stretch main
deb tor+http://sgvtcaew4bxjd7ln.onion/debian-security stretch/updates main
deb tor+http://vwakviie2ienjx6t.onion/debian stretch-updates main
Optional, if you want to use backports:
deb tor+http://vwakviie2ienjx6t.onion/debian stretch-backports main

Once you have finished editing /etc/apt/sources.list, run sudo apt-get update to confirm that the addresses are correct and to refresh your cache with the new sources:

Do note that updating and downloading packages will be notably slower through Tor but the privacy and security trade off is well worth the inconvenience. Only after this should you proceed to install the bulk of your packages.

Minimizing your Software Attack Surface

By installing only what you absolutely need, you minimize potential attack surface. Apt makes it very easy to do this in a single command, simply by clustering the desired packages together at the end. More savvy users will have probably already written scripts to accomplish this. To install your software selection: sudo apt-get install package1 package2 package3 package4 etc

Our demonstration system will only receive cinnamon-core (a minimal meta package for the Cinnamon desktop), libreoffice, file-roller, firefox-esr, gedit and synaptic: programs not uncommon in the arsenal of the average user.

Again, the download will take a bit longer than usual as packages are coming in through the Tor network.

One side note, depending on your choice of desktop environment: you may want to set your network interface to “managed” for network-manager-gnome. Edit /etc/NetworkManager/NetworkManager.conf and change “managed=false” to “managed=true”, as in the snippet below:
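On a default Debian install the relevant stanza looks something like this once changed:
[ifupdown]
managed=true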

After your packages have finished installing, the system should be rebooted, sudo reboot.

Routing DNS Traffic through Tor

DNS is one of the more fundamentally flawed aspects of networked computing privacy. DNS requests are unencrypted and they force you to rely on third parties to handle resolving domain names. That is, somebody else gets to see almost everything you attempt to establish a connection to. All too often that is your ISP or Google. Sure, you can change your DNS service to another resolver such as 1.1.1.1 (not actually all that private) or any of the OpenNIC or DNS.WATCH resolvers. But this still leaves your unencrypted requests free to be read by your network and ISP. The more technically inclined have tried taking it a step further with DNSCrypt to encrypt DNS traffic between the resolver and themselves. But the issue remains that you have still only shifted the realm of trust over to a third party. How can you be certain they don’t log connections?

That is why we want to use tor as a band-aid solution. DNS requests will be both encrypted and passed through a trustless system where the resolver cannot know who it is resolving domains for. It will be slower, at over 350 milliseconds to resolve versus a more conventional 50 millisecond average, but our goal isn’t speed. tor should already be installed from earlier when you set up apt-transport-tor, but in case it isn’t (or you just skipped to this section) install tor and torsocks.

Edit the torrc file created by tor during installation:
sudo vi /etc/tor/torrc
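What to add depends on your setup; a commonly used sketch is a local DNSPort for Tor to answer on, assuming the tor service on your system is permitted to bind that port (adjust the address and port if it cannot):
DNSPort 127.0.0.1:53
AutomapHostsOnResolve 1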

In network-manager-gnome toggle automatic DNS off and add a custom DNS server with the address we used in /etc/tor/torrc:

Click “Apply” and if the changes do not take effect immediately you may have to restart the tor and network manager services:

Or, without relying on network-manager, we can also go straight to /etc/resolv.conf:
sudo vi /etc/resolv.conf
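Assuming the 127.0.0.1 DNSPort from the torrc example above, the file only needs one line:
nameserver 127.0.0.1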

For the sake of completeness, for DEs not using gnome-network-manager, this similarly can be accomplished in wicd. Uncheck “Use global DNS servers”, check “Use Static DNS” and provide the address we configured.

Sending DNS requests through Tor only applies to the one network you have configured. If your machine is a portable or mobile computer that will be connecting to various networks, you will need to configure this same DNS setting for every new network connection.

Once you are finished configuring DNS through Tor, you can test and verify that it is working properly with any of the many DNS leak testing sites available. On our demonstration machine you can see that dns-leak.com already reports resolvers in use by the current Tor exit node.

The exit node that your connection uses rotates on ten minute intervals. If domain names are slow to resolve or not resolving at all, simply wait a few minutes or force a node change by running: sudo systemctl restart tor

Blacklisting Domains with Hosts

In addition to our firewall whitelist strategy (default deny), an additional blacklist approach (default allow) can be employed using your system’s hosts file. Several sources maintain blocklists geared toward preventing specific categories of domains from being resolved globally. Most are focused on blocking known ad and malware domains, although there are also resources for blocking domains operated by various commercial and government entities if you are so inclined.

This can be done as simply as copying and pasting the blocklist contents into your /etc/hosts file. Better still, there are automated scripts available which can be scheduled to run on cron to automatically update your hosts blocklist. hostsblock or hosts-blocking can be deployed to maintain lists of your choosing while culling duplicate entries.

Our demonstration machine will receive a single blocklist by copying it into the existing hosts file. First, it is a good idea to back up your hosts file before making any changes to it. Copy it to a directory that you will remember:
sudo cat /etc/hosts > /home/user/Documents/hosts-backup

Before making any changes to it, your hosts file will look something like this:

As an example, we can add the blocklist provided by yoyo.org to block known adserver domains.

Gain privilege to write to /etc/hosts with sudo -i. Then download and append the blocklist to /etc/hosts:
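A sketch of one way to do this from that root shell; the export URL below is the commonly used hosts-format link, so confirm it against yoyo.org's own instructions before relying on it:
wget -qO- 'https://pgl.yoyo.org/adservers/serverlist.php?hostformat=hosts&showintro=0&mimetype=plaintext' >> /etc/hosts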

If you issue cat /etc/hosts again you should now see a long list of entries redirecting ad domains to 127.0.0.1:

Now let’s verify that everything is working as intended by trying to access a blocked domain. Pinging any of these blacklisted domains should result in 100% packet loss. Alternatively, we can try to reach one of these sites through the web browser.

As you can see, it fails to connect to “zenzuu.com” as expected. Not only does this work for entire sites but also for individual page elements on other sites which are served by these domains.

However, I still recommend using an automated script to maintain hosts blocklists as manually editing /etc/hosts can be cumbersome and prone to error.

When possible, favor lists or automated scripts that redirect to 0.0.0.0 since it is faster and does not establish any routable connection, even to the local machine.

Configuring Mandatory Access Control

In the event that a networked program you run ends up executing malicious code, the potential for it to reach and damage other parts of the system can be mitigated by confining that program. AppArmor is an implementation of Mandatory Access Control which restricts programs to only the paths and ports that they need in order to function based on profiles.

As of Linux kernel 4.17, Debian enables the apparmor mandatory access control framework by default. For Debian 9 Stretch, without any backported or custom kernels, you will need some additional GRUB settings to get apparmor up and running. Install apparmor, apparmor-utils, apparmor-profiles and apparmor-profiles-extra.

If you are running a kernel prior to 4.17, edit your /etc/default/grub and add “apparmor=1 security=apparmor” to the line GRUB_CMDLINE_LINUX_DEFAULT after “quiet”: sudo vi /etc/default/grub

Run sudo update-grub to apply the changes. If you check the output of sudo aa-status you will notice that apparmor is still not loaded or active. Reboot for apparmor to load:

Now apparmor should be fully loaded with default profiles applied. This can be verified by running sudo aa-status again:

Some programs have profiles but only operate in complain mode, wherein they will just log denied actions to apparmor’s log. You can switch a program to enforcement mode with:
sudo aa-enforce /usr/bin/$PROGRAM

Or to return a program to complain mode: sudo aa-complain /usr/bin/$PROGRAM

The apparmor-profiles and apparmor-profiles-extra packages only provide profiles for a limited number of programs. You may also want to learn how to define your own apparmor profiles in order

to take full advantage of mandatory access control. Using the archive manager file-roller as an example, create a profile with “aa-genprof”:
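sudo aa-genprof file-roller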

You will be prompted whether or not to scan the system log for AppArmor events or finish and exit. Press “S” for scan.

Open the file-roller program separately and run through as many of its functions as you would normally use. This will give the profiler a chance to see everything that file-roller needs to access in order to work.

Close the program when you have finished and return to the profiler. It will ask you whether to allow or deny access to various components based on the actions seen in the system log scan. Choosing “(A)llow” and “(I)nherit” will generally work until you are confident that you know what you are doing.

When you reach the end, select “(S)ave Changes” and “(F)inish” to finish generating the archive manager profile. Set the new profile to enforce mode.

The process of apparmor profile creation can take a bit of trial and error. If all went well, testing the program as we did before should reveal that everything still works as intended.

Before creating any profiles, it is a good idea to check /etc/apparmor.d/ and /usr/share/doc/apparmor-profiles/ to see if a profile or template already exists for your target. A good rule of thumb is that if the program is listening on the network, it should probably be protected by

an apparmor profile. The best use case for this might be for Firefox or any other web browser you will be running.

Firewalling USB Interfaces

We want to prevent USB devices from interacting without explicit authorization. The command line utility usbguard is useful for this in that it acts as a USB firewall capable of denying USB device connectivity based on several characteristics. Some example usbguard rules explicitly deny devices which suspiciously advertise both the presence of storage and a keyboard interface, but it can also be configured to whitelist known good devices which you trust. And while the standard installation, requiring a password for actions, is more secure, a complementary GUI applet, usbguard-applet-qt, is available to handle operations by mouse input.

Aside from rejecting untrusted USB devices, usbguard will effectively prevent unauthorized USB I/O to your system when it is locked, suspended or hibernated while you are away. And, if you elect to sacrifice some additional convenience by forgoing the usbguard-applet-qt GUI, others will not be able to interact with your system’s USB interfaces even if your device is unlocked.

Install it with sudo apt-get install usbguard and note that, due to a bug, you may need to start the usbguard service three times after first installing it before it is successfully brought up using sudo systemctl start usbguard.service.

Create a default rule set for usbguard to load based on the USB interfaces already present on your system at the moment. Entries can be manually added or removed later.
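One way to do this (the redirect needs root privileges, hence the sh -c wrapper):
sudo sh -c 'usbguard generate-policy > /etc/usbguard/rules.conf'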

Edit the newly created /etc/usbguard/rules.conf so we can append some additional rules:
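Rules along these lines (modeled on the examples in usbguard's documentation) reject any device that presents both a mass storage interface (class 08) and a keyboard/HID interface (class 03):
block with-interface all-of { 08:*:* 03:00:* }
block with-interface all-of { 08:*:* 03:01:* }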

These additional rules, provided as examples of the kind of device blacklisting you can do with usbguard, reject USB devices which suspiciously present both storage and HID/keyboard interfaces. Now, with usbguard installed and configured, any time you plug in a new USB device it will first need to be authorized before it can interact with the rest of your system. The CLI way to do this is to identify the blocked device with sudo usbguard list-devices -b and then, after checking its characteristics, allow it with sudo usbguard allow-device $DEVICE_NUMBER.

After clearing the device for access, it should appear to your system just as your desktop environment would have normally handled the USB device according to its configuration.

Access is not permanent unless you manually whitelist the device in /etc/usbguard/rules.conf. Once you unmount and remove the USB device, it will need to be authorized again the next time you plug it in. To allow it permanently, just throw the -p or --permanent switch with allow-device:
sudo usbguard allow-device -p $DEVICE_NUMBER

If you don’t want to open a terminal every time you need to interface with a USB peripheral, you can also install usbguard-applet-qt, sudo apt-get install usbguard-applet-qt. Before it can function properly, you will need to append your user to the /etc/usbguard/usbguard-daemon.conf after “IPCAllowUsers=root”.

Then issue sudo systemctl enable usbguard-dbus.service and sudo systemctl start usbguard-dbus.service. Start the usbguard-applet-qt program. There is also an option in its settings to start in the background at login. Now when you plug in a new USB peripheral a countdown window will open prompting you to either allow or deny it.

Even though the GUI applet is more convenient, I cannot recommend it wholeheartedly until password authentication has been implemented. Otherwise anyone can bypass USB authorization with a simple mouse click on your potentially (you never know) unattended and unlocked computer.

Locking the Bootloader

Even though we have elected to encrypt our system drive with LUKS, the boot partition containing grub unfortunately remains unencrypted. While the following will not stop a determined adversary, we can mitigate some of the risk of an unencrypted boot loader by configuring a GRUB password as well as obfuscating GRUB by hiding splash and timer output. To password protect GRUB, issue:
grub-mkpasswd-pbkdf2

The password hash you just copied needs to be added to /etc/grub.d/40_custom. Edit it with sudo vi /etc/grub.d/40_custom and append a line setting superusers= to the name of your user and another line assigning the pbkdf2 passphrase to your designated superuser, followed by the passphrase hash:
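With “user” standing in for your own account name and the hash shortened here for readability, the appended lines take this shape:
set superusers="user"
password_pbkdf2 user grub.pbkdf2.sha512.10000.XXXXXXXX...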

We want the default operating system to boot automatically without having to enter this passphrase every time, so edit /etc/grub.d/10_linux and add --unrestricted to the operating system entry:
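One common way to do this is to append the flag to the CLASS variable near the top of /etc/grub.d/10_linux so that every generated Linux entry inherits it:
CLASS="--class gnu-linux --class gnu --class os --unrestricted"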

Finally, update grub and reboot:

Now if you try to enter GRUB options at the GRUB splash screen during bootup, you will be greeted with a login and password prompt. But if you don’t press anything, GRUB should automatically proceed to boot your operating system as before.

GRUB can also be hidden, showing no splash screen or instructions before booting the operating system. Edit /etc/default/grub:
sudo vi /etc/default/grub
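On GRUB 2.02 and later, settings along these lines hide the menu while leaving a one second window in which Esc can still be pressed:
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=1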

Update grub again, sudo update-grub. Now if you reboot, there should only be a momentary black screen before arriving at your LUKS drive decryption prompt. Pressing “Esc” during this black screen should still take you to GRUB options with the password set up earlier.

Again, hiding GRUB splash will not do much to deter a more experienced adversary. Only encrypting your bootloader could truly secure it from tampering. At the very least, your boot process is now quicker and sleeker.

Automating Security Scans

While this guide presents several measures to minimize the possibility of intrusion, we do not want to rely on faith that the system has not been compromised. There are several freely licensed malware scanning utilities available in the Debian repositories that we will want to configure to run on a periodic schedule. They only take a few moments to run so it would be unfortunate not to take advantage of them. For detection we can configure rkhunter, tripwire, or clamav. There is also Linux Malware Detect (LMD); however, this program has not yet reached the Debian repositories, so if you want to run this tool it will need to be installed manually.

To prepare our machine to search for rootkits, install rkhunter: sudo apt-get install rkhunter

There are some settings in rkhunter’s configuration file which we want to tweak. Edit /etc/rkhunter.conf:
sudo vi /etc/rkhunter.conf

Set the package manager option to “DPKG”

Tweak the mirrors options so that rkhunter can run updates with the --update switch.

You may also need to remove “/bin/false” from the WEB_CMD option if you encounter any errors regarding “relative pathname: /bin/false” when trying to update with the --update switch.
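After those three tweaks, the relevant lines in /etc/rkhunter.conf will look something like this:
PKGMGR=DPKG
UPDATE_MIRRORS=1
MIRRORS_MODE=0
WEB_CMD=""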

While the operating system is in a known trusted state, this is a good time to run rkhunter, checking for updates and taking stock of the current configuration with the --propupd switch.
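For example:
sudo rkhunter --update
sudo rkhunter --propupd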

Run a first full check reporting only warnings with:

sudo rkhunter --check --sk --rwo

On our demonstration system, you can see that there are warnings:

We can check these warnings by reading /var/log/rkhunter.log:

rkhunter takes issue with the hidden /etc/.java directory. This is normal for a new installation. rkhunter usually reports a number of false positives until you have had a chance to tweak and whitelist trusted binaries, users/groups and directories. You can allow these common files and directories to be ignored in the configuration file. Edit /etc/rkhunter.conf, sudo vi /etc/rkhunter.conf:
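In this case the whitelist entry is a single line (only add entries you have verified yourself):
ALLOWHIDDENDIR=/etc/.java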

Now if we run the same check with rkhunter again there should be no warnings:
sudo rkhunter --check --sk --rwo

The first few times you update your software from the repositories, rkhunter will likely complain about some changes. Verify that the files changed are in fact related to the upgrade and/or something you have modified, and if rkhunter continues to report warnings you can run it with the --propupd switch again. Just be sure to use that sparingly as it takes your current system configuration as a baseline of trust. rkhunter is a great tool to have but unless you remember to run it occasionally it won’t be entirely useful. This is something that should be automated with cron or anacron. Create a script that will update and run rkhunter periodically:
sudo vi /etc/cron.daily/rkhunter-daily-scan
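A minimal sketch of such a script; it assumes the default log location and simply appends a warnings-only check to it:
#!/bin/sh
# Refresh rkhunter's data files, then run a warnings-only check and append the results to /var/log/rkhunter.log.
/usr/bin/rkhunter --update --nocolors
/usr/bin/rkhunter --check --sk --rwo --nocolors --appendlog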

Set the script as executable: sudo chmod +x /etc/cron.daily/rkhunter-daily-scan rkhunter should now run daily and update /var/log/rkhunter.log. You can check the log at any time with cat /var/log/rkhunter.log | grep Warning. Those more ambitious will probably want to write the script so that it emails reports or automatically notifies the user by some other means.

For virus detection, clamav can come in handy. Install clamav:

sudo apt-get install clamav

This grabs a number of related packages, one of which creates a service for updating virus signatures in the background. It will take a few minutes for these signature databases to be updated before clamav is ready to scan anything. If you’re impatient like this author, you might want to opt for manually updating clamav databases with freshclam:
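The freshclam updater daemon holds a lock on its log, so stop it first if you update by hand:
sudo systemctl stop clamav-freshclam
sudo freshclam
sudo systemctl start clamav-freshclam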

Now, if we want to scan a directory, we can run clamscan with switches to recursively scan nested directories and to only report infected files:
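For example, to check your home directory:
clamscan -r -i /home/user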

Because this is a fresh installation, clamav has understandably found no infected files. As with rkhunter, we want to set up periodic scans with clamav that will write to a log file. Create a script for clamscan:
sudo vi /etc/cron.daily/clamav-daily-scan
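A sketch of what that script might contain; the excluded directory is only an example, so adjust the paths to your own system:
#!/bin/sh
# Recursively scan /home, report only infected files, and append the results to the log read below.
/usr/bin/clamscan -r -i --exclude-dir="^/home/user/Pictures" /home >> /var/log/clamav/clamscan.log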

Unlike this demonstration machine, on any real production system there will likely be directories filled with files that are trusted. ~/Pictures might only contain family photos or locally created images, while ~/Downloads is a much more likely place for foreign, infected files to appear. Full system scans can take a long time and may not be conducive to a daily schedule, so use the --exclude switch to whitelist directories that you generally trust. Make the script executable:
sudo chmod +x /etc/cron.daily/clamav-daily-scan

You can check the results of clamscan with the log this script creates: sudo cat /var/log/clamav/clamscan.log | grep Infected

These tools are helpful for detecting issues but, unlike with commercial antimalware, they do not actively remove any rootkits or infected files (although clamav does have options for file deletion). The utilities outlined here serve only to alert you to potential compromise.

Running Security Audits

Software maintainers do their best to deliver a baseline of security. In part to avoid making decisions for users or forcing strict, inconvenient practices, it is not uncommon for software to ship with weak default values. That leaves us with a large laundry list of settings to lock down. And since you are likely a lone individual securing these configurations, wouldn’t it be nice to have an automated tool give you a second opinion? Security auditing tools such as lynis, tiger, yasat and lsat can make that happen for auditing GNU/Linux system configurations. We are going to test out lynis on our demonstration machine:
sudo apt-get install lynis

Run a full system audit:
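sudo lynis audit system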

The tool will check many different programs and configurations for insecure settings, disabled security features, package manager and DNS functionality, etc.

Tests that check out okay will return output in green. Areas where improvements can be made will appear in yellow. Non-applicable items will be presented in white output and critical issues will be presented in red.

At the end of the audit, lynis thoroughly lists warnings to be resolved followed by suggestions for other weak areas. The report assigns a hardening index score at the very end.

Any warnings should be addressed as soon as possible. It might be as simple as outdated packages or you may have a misconfigured firewall. The suggestions are given as take-it-or-leave-it advice but not all of them will necessarily be helpful to an end user laptop or desktop system.

Some common suggestions lynis may make are to install needrestart or libpam-tmpdir, or to tweak the default umask.

needrestart will check for daemons that need to be restarted following library upgrades allowing you to selectively choose which ones to restart. It will also alert you to outdated kernels requiring a

reboot. libpam-tmpdir mitigates potential symlink attacks against shared /tmp files and enforces isolated temporary directories for each user.

Another recommendation we can follow is to tighten the default umask setting in /etc/login.defs: sudo vi /etc/login.defs

This dictates the permissions set for newly created files. 022 allows all users to read and execute a new file (permissions 755) while 027 only allows the file’s group to read and execute it (permissions 750). At the risk of breaking some functionality, you may even want to tighten the umask to 077 so that only the file’s owner has any permissions, but this probably goes beyond the practicality of a typical desktop use case.
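The corresponding line in /etc/login.defs would then read:
UMASK 027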

If we run the security audit again, you will notice that these changes we have implemented are already reflected in the hardening index:

Of course, do not allow yourself to get caught up in simply raising the hardening index score. It is only there as a reference point and many of the suggested actions are really only applicable to

servers or otherwise unhelpful to your particular security strategy. Consider the legal banner suggestion: unless you A) have an SSH server up and running, and B) a legal team at the ready, a login banner isn’t going to do us much good other than to artificially inflate lynis’s hardening index.

Configuring a System-Wide VPN

Even though we have routed our DNS queries and package management through Tor, our system’s IP address remains outwardly visible at all times. A VPN will not only conceal this address, but is also useful for concealing network activity from your ISP and local network. VPNs are no silver bullet, however. This only shifts our trust over to another entity. A VPN is great when the need arises to A) hide your real external IP address and/or B) conceal network activity from any LAN/WLAN that you are connected to as well as its associated ISP.

Many VPN providers offer dedicated (often proprietary) applications or browser addons to make use of their service; however, we want to be sure that the entirety of our machine’s network activity makes use of the VPN connection, so we will configure it through openvpn. Install openvpn:
sudo apt-get install openvpn

Download the .ovpn file usually provided by your VPN service and start an openvpn session pointing it to the .ovpn file with the --config switch:
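For example, assuming the file was saved as provider.ovpn in the current directory:
sudo openvpn --config ./provider.ovpn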

You may be prompted for login credentials depending on the provider. The connection should then take only a moment to establish. Unlike some proprietary VPN clients, this routes everything (more than just your browser traffic) through the VPN tunnel.

A systemd service can be created for this if you would rather have a VPN running all the time, from the moment you boot up. Create a systemd unit file with:
sudo vi /lib/systemd/system/VPNconnection.service

And add the following, adjusting for the location of your .ovpn file:
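A minimal sketch of such a unit, assuming the .ovpn file was copied to /etc/openvpn/client.ovpn:
[Unit]
Description=System-wide OpenVPN tunnel
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/sbin/openvpn --config /etc/openvpn/client.ovpn
Restart=on-failure

[Install]
WantedBy=multi-user.target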

Set the new unit file as read only except for root and then reload systemd daemons:
sudo chmod 644 /lib/systemd/system/VPNconnection.service
sudo systemctl daemon-reload

Enable the new service and next time you reboot your network traffic should be routed through your VPN by default. sudo systemctl enable VPNconnection.service && sudo systemctl start VPNconnection.service

Or maybe you prefer to integrate VPN functionality within your desktop. This can be easier to manage through your existing network-manager-gnome applet. Install the openvpn integration framework:
sudo apt-get install network-manager-openvpn-gnome

Open Network Manager and click “+” to add a new network. Select the VPN option.

Select OpenVPN. Alternatively, some VPN services provide *.ovpn files with pre-configured settings which you can import, in which case select “Import from file”.

Fill in relevant information provided by the VPN service such as passkeys or server addresses and click “Add”.

The connection option should now appear in your network manager applet. Simply toggle it on and the connection should take a moment to authenticate.

You will be notified when the connection is established and the network icon in your system tray will often change into a lock or similar icon indicating that traffic is passed through a VPN.

This is a good time to test that your IP address actually appears as the VPN’s address and that you are not leaking WebRTC data with VPN testing sites. Do not worry about setting DNS over Tor for your VPN connections. It is actually better to use the VPN’s DNS servers in these instances.

Just a note about the choice of example VPN used in this section: I cannot recommend “vpnbook” as a serious solution, as their free option very likely logs and/or sells traffic data, and it is terribly slow (for obvious reasons). It was only meant for demonstrative purposes and you will probably want to do research on finding reputable VPN services which can be paid for anonymously (via cash or cryptocurrency) and which hold to their word with as much evidence as possible. Virtual Private Networks are still a system of trust, after all.

Hardening the Web Browser

Up until now we have only focused on globally hardening the operating system; now it is time to look specifically at the browser. The web browser represents a huge attack surface and probably warrants a guide in its own right. We will cover as much as is practical in this section. firefox-esr serves as a good foundation and there are several settings we will want to change which affect privacy and security.

In about:preferences, change the homepage and default search engine. I recommend startpage.com, an engine which has been endorsed by Edward Snowden that proxies queries anonymously to Google search.

Change the default search engine to any of the recommended privacy conscious search engines. You may also want to disable search suggestions.

Disable or remove any plugins that you will not be using which may have shipped with your browser by default.

In about:config, navigate to browser.newtabpage.enabled and set it to “false”. Unless you only run Firefox in private browsing mode, suggested tiles can betray your frequented sites to shared users or physically present observers. Likewise, also set browser.startup.page to “0” for blank page or “1” for the homepage set earlier. Frequent site tiles subconsciously train you to visit a limited set of sites and Mozilla has been caught in the past experimenting with serving ads through suggested tiles.

Disable WebRTC functionality to prevent leaking real address data through WebRTC when using a VPN. Navigate to media.peerconnection.* and set as follows:
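At minimum, the main switch should be off:
media.peerconnection.enabled = false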

While in about:config, take the time to go through the settings outlined below, or on any number of the sites reputable for hosting configuration hardening guides such as the one at https://www.privacytools.io/#about_config (which supplied several of the about:config suggestions found here).

There will be some variation depending on who you follow. Since this guide is not exclusively focused on privacy but also on security, I hesitate to recommend disabling the safe browsing phishing or malware protections. Privacytools.io recommends disabling them on the basis that they provide your personal search data to Google. However, Firefox only grabs blocklists from Google and checks your URLs against these lists locally. If a URL does need to be sent, since 2011 Firefox has stripped out identifying information, meaning safe browsing only sends anonymized, hashed parts of the URL used to check against known malware domains.

Ensure that Firefox always starts in private browsing mode. browser.privatebrowsing.autostart = true

If you elect to use Firefox outside of private browsing mode, consider at least disabling form auto-fill and never storing unencrypted session data. browser.formfill.enable = false browser.sessionstore.privacy_level = 2 Or clear browsing history on shutdown. privacy.sanitize.sanitizeOnShutdown = true

Deny Firefox access to cameras and microphones (unless you actually plan to use web video/voice). 0 = Ask, 1 = Allow, 2 = Block. permissions.default.microphone = 2 permissions.default.camera = 2 media.navigator.enabled = false

Prevent performance information (though anonymized) from being reported to Mozilla. datareporting.healthreport.uploadEnabled = false

toolkit.telemetry.enabled = false

Request sites which abide by Do Not Track to refrain from tracking you. privacy.donottrackheader.enabled = true privacy.donottrackheader.value = 1

Enforce TLS negotiation through newer versions of TLS (1.2+). security.ssl.require_safe_negotiation = true security.tls.version.min = 3

Disable the long insecure triple-DES cipher (which is itself a band-aid on the much older, since-broken DES algorithm) to prevent client-server TLS negotiation over triple-DES. security.ssl3.rsa_des_ede3_sha = false

Disable the short keylength Diffie-Hellman cipher suites which have proven vulnerable to downgrade attacks. security.ssl3.dhe_rsa_aes_128_sha = false security.ssl3.dhe_rsa_aes_256_sha = false

Turn off pre-loading of page links and their resources. network.prefetch-next = false network.dns.disablePrefetch = true browser.urlbar.speculativeConnect.enabled = false

Sanitize referer header information and behavior. network.http.referer.trimmingPolicy = 2 network.http.referer.XOriginPolicy = 2 network.http.referer.XOriginTrimmingPolicy = 2

Isolate site-delivered resources to the first party domain. privacy.firstparty.isolate = true

Spoof canvas and other identifiers and set various attributes to masquerade as a more common Firefox ESR. privacy.resistFingerprinting = true

Enforce Firefox’s internal blocklists of known tracking sites. privacy.trackingprotection.enabled = true

Prevent websites from tracking click events. browser.send_pings = false

Block sites from getting battery status of your device. dom.battery.enabled = false

Prevent sites from seeing cut/copy events. dom.event.clipboardevents.enabled = false

Disable geolocation. geo.enabled = false

Disable DRM playback of encrypted content. There is no true way of verifying what any DRM module is (or is not) doing with your computer. media.eme.enabled = false

media.gmp-widevinecdm.enabled = false

Disable third party cookies and delete remaining cookies when Firefox is closed. network.cookie.cookieBehavior = 1 network.cookie.lifetimePolicy = 2

Allowing websites to run code directly on your GPU is a security risk. Disable WebGL. webgl.disabled = true

Visual confusion of IDN/Unicode lookalike characters can be exploited for phishing attacks. Display internationalized domain names as punycode instead. network.IDN_show_punycode = true

Disable use of W3C’s beacon analytics implementation. beacon.enabled = false

There are many addons available which will greatly enhance browser security. However, it is not enough to simply install addons and forget about them. They all have settings which greatly enhance their effectiveness. For example, “HTTPS Everywhere” will attempt to force secure TLS connections but by default will allow unencrypted connections as a fallback. Since we do not want adversaries to potentially view or modify our plaintext web traffic, we should enable “disallow unencrypted connections”. Like with USBguard, it is detrimental to convenience but invaluable to our overall security strategy.

A web content blocker will be one of the single most powerful addon tools you can deploy. uBlock Origin is an extremely popular blocker that can be configured to essentially perform as a content firewall for your browser. sudo apt-get install xul-ext-ublock-origin or install uBlock Origin from https://addons.mozilla.org/.

Out of the box, uBlock Origin will reject all elements originating from the domains on its default blocklists. It also filters pages for blocking ads and malicious content. But uBlock Origin can be configured to do so much more. Enter its settings page from about:addons or through uBlock Origin’s settings icon.

In the first settings tab, enable advanced user mode. Take advantage of uBlock Origin’s built in WebRTC disablement and block JavaScript and remote fonts by default.

In the Filter Lists tab, expand the categories by clicking the plus icons and check off the additional lists. Most of them should be relevant in most cases except possibly the localized language specific blocklists. Hit “Apply changes” and “Update now”.

Open the uBlock Origin panel again and this time there should be entries in a left-hand pane. These are a visual representation of what page elements are blocked or allowed. We want to enforce a default deny policy, so click the right-hand half inside the first upper box for “Global rules: … apply to all sites”.

Every column should turn red. From here on out, we want to exceptionally allow elements from trusted or frequent pages by clicking on the middle portion of each element’s box in the right-hand column. To save the changes permanently, click on the lock icon. You now effectively have a browser based firewall.

If, earlier, you omitted enabling the privacy.resistFingerprinting configuration, every site you visit can accurately determine your web browser version, operating system version, and the CPU architecture your browser was built for through the user agent string it advertises. In this case, we want to be able to lie about this and report that we are running a system different than what is actually in use.

Install the User-Agent Switcher addon from https://addons.mozilla.org/. Out of the box, it does not change your user agent string at all. You can manually change it to anything listed in the addon’s panel. Better yet, check off “Enable Random Mode” to set User-Agent Switcher to periodically cycle user agents.

Tighten the interval down to every hour, or every minute and enable as many user agent categories as you are comfortable with presenting as. Be aware that choosing “Bot” may cause some sites to restrict access or to make you jump through Captchas.

For every domain and associated cookie that you allow through other blockers such as uBlock Origin there is the potential that those isolated elements are still being used for tracking. There is an addon which can monitor this behavior over time and try to determine if the domain/cookie should be blocked. This tool is the Electronic Frontier Foundation's “Privacy Badger”.

Install the Privacy Badger addon from https://addons.mozilla.org/. Privacy Badger will need to run while you browse for some amount of time before it will begin to discern trackers to block. If you only use private browsing mode, you will need to enable Privacy Badger to learn in private browsing from the settings page.

We want to avoid handling information in plaintext as much as possible. This requires a way to enforce encrypted HTTPS connections. One of the more popular addons that can do this is the Electronic Frontier Foundation’s HTTPS Everywhere.

sudo apt-get install xul-ext-https-everywhere or install HTTPS Everywhere from https://addons.mozilla.org/. It will attempt to negotiate connections over HTTPS which otherwise would have defaulted to insecure HTTP. HTTPS Everywhere falls back to unencrypted pages if no rules are found for that site. We want to prevent this behavior so check off the option to block unencrypted connections.

This will not absolutely guarantee that every resource on a page will be delivered through HTTPS. It is possible that third party resources used by that page have no proper TLS/SSL implementation. The left pane of your address bar will indicate if there are still insecure elements being served through that particular site by displaying an orange warning icon.

Many sites share dependencies on third party resources such as JavaScript and jQuery libraries. Normally, your browser would make a connection to a Content Delivery Network (CDN) and accept a download of that object even if it had done so in the past. There is an addon, “Decentraleyes”, which aims to exploit this redundancy so that your browser can hold on to that copy in case another site requests it in the future. This saves time and bandwidth and prevents your browser from even having to make a request for the content in the first place, improving privacy.

Install Decentraleyes from https://addons.mozilla.org/ and navigate to its setting page.

Make sure that “Disable link prefetching” is checked and “Strip metadata from allowed requests” is checked. Blocking requests for missing resources may cause some breakage.

Of the remaining allowed cookies, we want them to be subject to removal shortly after leaving a site. This can partially be accomplished in Firefox settings by deleting cookies after closing Firefox. But many of us leave the web browser open for long periods of time (or even all the time). That is why Cookie AutoDelete will be a useful solution.

Install Cookie AutoDelete from https://addons.mozilla.org/. Configure it to delete cookies from sites after leaving the page.

On sites where cookies are required to remain signed in to an account or to remember settings, you can greylist those cookies in the Cookie AutoDelete addon panel. The notifications that cookies have been deleted can become annoying, so there is also an option to disable notifications. They do reveal just how many tracking cookies sites will throw at your browser, but this information can be viewed in Cookie AutoDelete's log anyway.

With all of these tweaks and addons in place, Firefox should run relatively silently, making as few requests as possible. Trackers and advertisers will have a much more difficult time identifying you with certainty and will not be able to rely on third party tracking networks such as gravatar or googleapis to follow you around the web.

Keeping Computing Local

Convenience has trained us to resort to the web to find answers to simple problems. We have become all too comfortable using our search engines as a spell checker or to find mapping routes. These sites might be described as SaaSS or “Service as a Software Substitute”. What this habit does, however, is give those services a continual stream of insights and snippets into what we are thinking about and what we are working on. The tragedy here is that these are often tasks that can be handled on the local machine with no Internet access at all, i.e. in total secrecy.

For example, next time you find yourself looking for words outside of your existing vocabulary, instead of potentially giving away your private thoughts to thesaurus.com, consider using a local thesaurus program. Artha is a freely available thesaurus program in Debian's repositories and operates entirely on the local machine.

When you use a third party site or your search engine to convert between metric and imperial, or to perform any other unit conversion, that service can glean possible personal information from your queries. This is another operation that can and should be done locally. For a CLI way there is GNU Units; install it with sudo apt-get install units and convert between different standards by:
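For example, a couple of simple non-interactive conversions (the quantities here are arbitrary placeholders) might look like:

# each invocation prints how many of the second unit equal the given quantity
units "100 km" "miles"
units "5 pounds" "kilograms"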

Or for a graphical way there is gnome-calculator, which you likely already have installed, depending on your choice of desktop. Set it to advanced mode to reveal unit conversion options.

When you translate text, Microsoft, Google or whoever is doing the translating for you gains insight into the things that you are communicating, researching or simply thinking about. This can be averted by using an offline text translator. Install apertium with sudo apt-get install apertium, as well as any package specific to the languages you wish to translate. For example, to translate from English to Spanish, install the apertium-en-es package alongside apertium. You can then translate text by piping it through apertium with:
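For instance, with apertium and apertium-en-es installed, a sentence can be piped straight through the en-es language pair (the sample text is only an illustration):

# translate English text to Spanish entirely on the local machine
echo "Where is the library?" | apertium en-es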

Unfortunately, apertium's GUI front end, apertium-tolk, has fallen out of development, so for now we are stuck with the CLI.

Many users rely on third party sites to download video or audio from popular video hosting sites. These sites usually also offer options to convert formats or to strip out the video track, so these are features people have come to expect. Luckily, we can do most of this locally with youtube-dl, with the caveat that the content is actually downloaded to your machine. Despite the name, it works across a vast swath of sites.

Because sites frequently change their APIs and interface layouts, the stable youtube-dl package may struggle to reliably grab content. I recommend installing it from the backports repository or through python-pip to get a newer version: either sudo apt-get install -t stretch-backports youtube-dl or sudo pip install youtube-dl. You may also need to install the libav-tools package.

Once it is installed, navigate to and copy the web address of whichever video or audio you would like to download. Enter youtube-dl -o "$HOME/Downloads/%(title)s.%(ext)s" $LINK_TO_VIDEO, pasting the link you saved earlier in place of $LINK_TO_VIDEO (gnome-terminal makes pasting easy with “Ctrl+Shift+V”). This will download the file into your ~/Downloads directory using its title and file extension.

The file should appear in the designated folder, as expected.
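Since format conversion was mentioned above, note that youtube-dl can also extract just the audio track locally, provided libav or ffmpeg is available; a rough example, reusing the same placeholder link, would be:

# download the content and convert it to mp3 instead of keeping the video
youtube-dl -x --audio-format mp3 -o "$HOME/Downloads/%(title)s.%(ext)s" $LINK_TO_VIDEO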

There are so many Services as a Software Substitute that people hand their computing over to every day that it would be impossible to cover all of their local alternatives. To briefly mention some additional resources:

• Generating QR codes: qrencode (see the example after this list)
• Timer/stopwatch: stopwatch, gnome-clocks
• Synthetic speech/text-to-speech: espeak
• Mapping and Routing: marble, gnome-maps (only offline after caching locations)
• Generating animated GIFs: ffmpeg & FFMPEG gif script for bash
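As a quick illustration of the first item, qrencode can generate a QR code image entirely offline (the output path and encoded text are arbitrary examples):

# encode a string into a PNG QR code without any network access
qrencode -o ~/Downloads/example-qr.png "https://www.debian.org"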

File and Backup Encryption

An important part of any security strategy is the ability to recover from a compromised system or even from a simple drive failure. To ensure data availability, we want to back up our data to at least three other locations and to do so in an encrypted manner.

It is a good idea to create a LUKS header backup for the main drive. If the header of a LUKS volume or a LUKS key-slot becomes damaged, all of that volume's data is lost permanently. The only way to recover is to restore from a LUKS header backup. Create one by running cryptsetup luksHeaderBackup, replacing the example “/dev/vda5” with your own encrypted partition:
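A minimal invocation might look like the following, where both the device and the backup file name are example values to substitute with your own:

# write a copy of the LUKS header to a file; keep that file off the main drive
sudo cryptsetup luksHeaderBackup /dev/vda5 --header-backup-file luks-header-backup.img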

Unlike in the example, it would be best to store this backup anywhere other than your main drive.

Not only should your LUKS volume headers be backed up, but every critical file you own should also be safely copied to at least three other locations to protect against the possibility of data loss.

Your local backup drives should also be encrypted. The CLI way to accomplish this is through cryptsetup, substituting your target drive where the example says “/dev/sdb”:
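A sketch of that command, keeping the example device name, would be (this destroys any existing data on the target drive):

# format the backup drive as a LUKS encrypted volume
sudo cryptsetup luksFormat /dev/sdb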

Type “YES” and proceed to create the passphrase for this encrypted drive. Then open the drive under a mapping name; you will need to enter the passphrase you just created:
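For example, using “testname” as the mapping name (the same example name used when closing the volume later):

# unlock the encrypted drive and map it to /dev/mapper/testname
sudo cryptsetup luksOpen /dev/sdb testname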

Create a new ext4 volume within the encrypted drive:
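Continuing with the same example mapping name:

# create an ext4 filesystem inside the unlocked LUKS container
sudo mkfs.ext4 /dev/mapper/testname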

You may need to grant ownership of the volume to your default user:
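As a sketch, assuming the volume is mounted at /mnt (your desktop environment may choose a different mount point under /media):

# mount the volume, then hand ownership of its root directory to your user
# (assumes your primary group shares your user name, as is the Debian default)
sudo mount /dev/mapper/testname /mnt
sudo chown $USER:$USER /mnt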

You should now be able to use this drive as usual, without having to worry about encrypting individual files. A LUKS header backup can be created for this drive through the same steps outlined earlier.

When you are finished with the drive, unmount it and close the LUKS volume with: sudo cryptsetup luksClose /dev/mapper/testname

Depending on your target desktop environment, mounting this drive any time in the future should open a GUI prompt to unlock the drive. Alternatively, this whole process could have been accomplished entirely through a GUI utility such as gnome-disk-utility.

If you just have a few files that you would like to secure, or if you would prefer to keep your data encrypted on a file system readable by other operating systems (such as NTFS), you can encrypt them individually with gnupg.

For example, we will encrypt “Finances.txt” with reasonably strong symmetric encryption. Pass the -c switch to gpg to tell it to use a symmetric cipher:
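A minimal form of the command looks like this:

# prompts for a passphrase and writes Finances.txt.gpg alongside the original
gpg -c Finances.txt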

This should create an encrypted copy of the file in the same directory, denoted by the appended .gpg extension. Remember that the original unencrypted file remains, so you may decide to delete it. If so, use a secure deletion tool such as shred or srm.

It is now safe to copy “Finances.txt.gpg” to unencrypted locations or to send it over the network. The recipient will need to know the symmetric passphrase in order to decrypt the file. On their end, the --decrypt switch should be used:
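For example, redirecting the recovered plaintext into a new file:

# prompts for the shared passphrase and writes the decrypted copy
gpg --decrypt Finances.txt.gpg > Finances.txt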

It can alternatively be phrased as: gpg --output /home/user/$DECRYPTED_FILE --decrypt /home/user/$ENCRYPTED_FILE

If all went as expected, the output file should be a readable 1:1 copy of the original.

There are a number of different algorithms to choose from, although AES256 is fast, especially for large files, while still being reasonably secure.
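If you would rather name AES256 explicitly than rely on gpg's default choice, the cipher can be requested on the command line:

# symmetric encryption using AES256 specifically
gpg -c --cipher-algo AES256 Finances.txt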

Testing Your Configuration

With everything said and done, we want to be able to get an outsider’s perspective of how our system now advertises itself. There are a number of sites that can be used to sample this information.

To test your browser fingerprint and to verify that your user agent is being reported as intended, use the Electronic Frontier Foundation's Panopticlick. It will inspect various identifiers and behaviors of your browser and report how unique your fingerprint is, expressed as one in every X browsers bearing the same fingerprint as your own.

Whether you're verifying that your configuration is using your VPN provider's DNS servers or checking your Tor DNS configuration, sites like dns-leak.com are invaluable. Dns-leak.com uses JavaScript to send a request to fetch dummy resources on their server. When they receive the request, dns-leak.com determines which nameservers were used to make it and reports them back to the client.

To verify that clamav is in fact doing its job, we can place eicar.org's test signature file somewhere on our system and see if clamscan is able to find and report it. The file can be downloaded from eicar.org, or you can simply copy the signature line into a text file. Note that the eicar test file is not actually harmful.
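One way to fetch the test file is shown below; the download URL reflects eicar.org's usual hosting location and may change, and the destination path is only an example:

# place the harmless EICAR test file in your home directory
wget -O ~/eicar.com.txt https://secure.eicar.org/eicar.com.txt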

Or you can manually run clamscan on the file, although this won't help you determine whether or not your automated scan script is working.
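A one-off manual scan of just that file, assuming the path used above, would be:

# scan only this file; it should be flagged as infected
clamscan ~/eicar.com.txt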

Leaving it in place should eventually trigger the automated scan to report the test signature file.

The network firewall can also be tested by running grc.com's Shields Up!! test. It only scans the first 1056 ports officially designated by IANA and reports any port from which it receives a response of any kind. Ideally, your system should send out no network packets of any kind in response to Shields Up!! probing.

WhatIsMyIP.com is one example of the many sites which can be used to show you your public IP address. The report should additionally tell you whether your IP is an IPv4 address or an IPv6 address, your approximate location and the host Internet service provider.

For testing WebRTC behavior we can easily use privacytools.io's own WebRTC IP Leak Test. You don't actually need to be connected through a VPN or Tor in order to test this. Unless you intend to do browser video conferencing, WebRTC should always be completely disabled. WebRTC IP Leak Test attempts to obtain the client's local and public IP addresses through JavaScript.

You can briefly audit your browser's TLS client from the outside, using SSL (TLS) testing sites like the one at How's My SSL?. It will return a list of all your browser's currently supported cipher suites and will score the various ciphers and versions against known issues, assigning them a rating of “Good”, “Needs Improvement” or “Bad”. Since we disabled usage of old TLS versions and triple-DES earlier, testing with “How's My SSL?” should not return any bad results.

These resources are mainly here to give some indication as to whether our strategies are working as intended. None of these types of resources are 100% foolproof, and a number of similar sites will attempt to use various tricks in order to “cheat”. For example, avoid “whatsmyip.org” since, with JavaScript disabled, it falsely reports random and unrelated IP addresses to visitors.

Closing Thoughts

Now sit back and prepare for a number of sites to break, become unrecognizable, accuse you of being a bot or put you through a gauntlet of Captchas. The important thing is that you can now discriminatingly allow their content in as you see fit. If a site couldn't be bothered to encrypt its connection with a good TLS cipher, then it didn't deserve your attention to begin with. Your computer is now equipped with the means to force sites to comply with your standards or be denied all access.

And while it may seem stupid and obvious, make sure to configure a screen lock timer on whichever desktop environment you use. Hibernating or shutting down after a period of inactivity would also complement a screen lock timer. A system that is offline is unlikely to be exposed to a network attack.

Also consider configuring automatic updating, be it simple notifications that upgrades are available or fully automated system upgrades with tools such as unattended-upgrades. Always update early and often.
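As a minimal sketch of the fully automated route on Debian, assuming the stock configuration is acceptable:

# install unattended-upgrades and enable its default periodic upgrade job
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades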

This has been far from an exhaustive walk-through and there is still so much more you can do to improve your cyber security and privacy. Those who wish to take control of their data to an entirely different level should probably be looking into (in no particular order) solutions including but not limited to: Tails, Qubes OS, Coreboot/Libreboot, the Tor browser bundle, and other specially crafted tools for freedom and privacy.

If you have followed this whole document, or even just selectively implemented a few sections, you should now, at the very least, have a strong foundation on which to build and implement your own security enhancements. Perhaps one could build a spin that bites back at trackers by running TrackMeNot, AdNauseam and Privacy Possum? Even if it isn't perfect, this should place you well above the average netizen in becoming resilient against surveillance and common cyber threats. Consider an old joke: “You don't have to run faster than the bear to get away. You just have to run faster than the guy next to you.”

Useful Links and References

Resources mentioned here are noted either for their extremely helpful information in constructing this guide or simply because the author found them memorable.

• The Paranoid #! Security Guide
• Securing Debian Manual
• Gentoo Wiki Security Handbook
• Arch Linux Security Wiki
• Online-Privacy-Test-Resource-List (web.archive.org)
• Dig Deeper Software Privacy
• DuckDuckGo Linux Privacy Tips
• The Hated One
• PRISM-Break.org GNU/Linux
• PrivacyTools.io
• That One Privacy Site
• Rex Kneisley
• EFF Surveillance Self-Defense
• Proprietary Software Is Often Malware
• Device and Personal Privacy Technology Roundup
• ghacks-user.js Hardening Firefox Privacy, Security and Anti-Fingerprinting
