I’ve spent some time searching this question, but I have yet to find a satisfying answer. The majority of answers that I have seen state something along the lines of the following:
1. “It’s just good security practice.”
2. “You need it if you are running a server.”
3. “You need it if you don’t trust the other devices on the network.”
4. “You need it if you are not behind a NAT.”
5. “You need it if you don’t trust the software running on your computer.”
The only answer that makes any sense to me is #5. #1 leaves a lot to be desired, as it advocates for doing something without thinking about why you’re doing it – it is essentially a non-answer. #2 is strange – why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router’s NAT at port 80 to expose that server to the public. What difference does it make to then have another firewall that also needs a port forwarded? #3 is also strange – what sort of malicious behaviour could even be directed at a device with no firewall? If no applications are listening on any port, then there’s nothing to access. #4 feels like an extension of #3 – only, in this case, the device is most likely exposed to a larger group. #5 is the only one that makes some sense: if you install a program that you do not trust (you don’t know how it works), you don’t want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be a door into your device, or a spy on its actions.
If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in. People seem to treat it as if it’s acting like the front door to a house, but this analogy doesn’t make much sense to me – without a house (a service listening on a port), what good is a door?
You’re right. If you don’t open any ports on the machine, you don’t need a firewall to drop the packets – the closed ports will reject them anyway. So you only need one if your software opens ports that shouldn’t be reachable from the internet. Or you don’t trust the software to handle things correctly. Or things might change later – you or your users install additional software and forget about the consequences.
However, a firewall does other things, too. For example forwarding traffic. Or, in conjunction with fail2ban: blocking people who try to guess SSH passwords and connect to your server multiple times a second.
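For reference, a minimal sshd jail is only a few lines (a sketch – the file path and the `sshd` jail name are the usual defaults, but check your distro):

```
# ban an IP for an hour after 5 failed ssh logins within 10 minutes
cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
systemctl restart fail2ban
```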
Edit:
How would the firewall on one device prevent other devices from abusing the rest of the network? Perhaps you misunderstood the original intent of my post. I certainly wouldn’t blame you if that’s the case – I was far too vague when I made my post. Perhaps I simply didn’t think my question through enough, but the more likely answer is that I wasn’t knowledgeable enough on the topic to accurately pose the question I wanted to ask.
Fair point!
For this, though, the only solution would be an application-layer firewall like OpenSnitch, correct?
Sure. I’m not exactly sure any more what I was trying to convey. I think I was going for the firewall as a means of perimeter security. Usually devices are just configured to allow access to devices on the same Local Area Network. This is the case for lots of consumer electronics (and some enterprises also rely on securing the perimeter – once you get into their internal network, you can exploit that). My printer lets everyone print and scan, no password setup required while installing the drivers. The WiFi smart plugs I use to turn the mood light in the living room on and off also accept everyone on the WiFi by default. And lots of security cameras have no password on them, or people don’t change the default, since they’re the only ones able to connect to the home WiFi.

This works because usually there is a WiFi router that connects to the internet and also does NAT, which I’d argue is the same concept as a firewall that discards incoming connections. And while WiFi protocols have had vulnerabilities, it’s fairly uncommon for people to go wardriving or park near your house to crack the WiFi password.

However, since you mentioned mixing devices you trust and devices you don’t trust… that can have bad consequences in a network setup like this. You either do it properly, or you need some other means to secure your stuff. That may be isolating the cheap Chinese consumer electronics with god knows which bugs and spying tech from the rest of the network. And/or shielding the devices you can’t set a password on.
I don’t think you can make an absolute statement in this case. It depends on the scenario, as it always does with security. If you have broken web software with known and unpatched vulnerabilities, a Web Application Firewall might filter out the malicious requests. An Application Firewall might help if other software is susceptible to attacks or might become the attacker itself (I’m not entirely sure what they do). But you might also be able to use a conventional firewall (or a VPN) to restrict access to that software to trusted users only – for example, drop all packets unless it’s you interacting with that piece of software. And you can also combine several measures.
Are you referring to the firewall on the router?
Interesting. I hadn’t heard of this.
As in blocking or restricting their communication with the rest of the LAN in the router’s firewall, for example? Or, perhaps, putting them behind their own dedicated firewall (though this is probably superfluous to the firewall in the router)?
For clarity’s sake, would you be able to provide an example of how this could be implemented? It’s not immediately clear to me exactly what you are referring to when combining ‘user’ with network-related topics.
Yes. At home this will run on your (WiFi) router. But the standard rules on that are pretty simple: discard everything incoming, allow everything outgoing. Companies might have a dedicated machine – something like a pfSense box in a server rack at each of their subsidiaries – and draw a perimeter line around whatever they deem fit: the office building, a department, or the whole company’s internal network versus the internet (or a combination of those). At home you just have one point where two network segments interconnect: your router.
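In nftables terms, that default policy is roughly this (a sketch – `lan0` and `wan0` are made-up interface names, and real router firmware ships its own equivalent):

```
# stateful "drop everything incoming, allow everything outgoing"
nft add table inet router
nft add chain inet router forward '{ type filter hook forward priority 0; policy drop; }'
nft add rule inet router forward iifname lan0 accept                  # outgoing: allowed
nft add rule inet router forward ct state established,related accept  # replies: allowed
# plus NAT so the whole LAN shares the one public address
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100; }'
nft add rule ip nat postrouting oifname wan0 masquerade
```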
I think it is important to distinguish between this kind of firewall and something that runs on a desktop computer. I’d call the latter a personal firewall or desktop firewall. It does different things: detect what kind of network you’re connected to, for example – enable sharing when you’re at your workplace but inhibit the Windows network share when you’re on the airport WiFi. It adds a bit of protection to the software running on the computer, and can also filter packets from the LAN. And it’s often configured to be easygoing in order not to get in the way of the user. But it is not an independent entity, since it runs on the same machine that it is protecting. If that computer gets compromised, so is the personal firewall.

A dedicated firewall, however, runs on a dedicated and secured machine, one with no user software installed that could interfere with it. And it sits at a different location: it filters traffic between network segments, so it might physically sit at some network interconnect. There are lots of different ways to do it, and people apply things in different ways. Such a firewall might not be able to entirely protect you, or to stop malicious activity spreading within the attached network at all. And of course you need the correct policy, and to type in the rules that allow people at the company to work but inhibit everything else. Perfection is more a theoretical concept here, not something that can be achieved in reality.
Yes, you’d need to separate them from the rest of the network so your router sits between them. Lots of WiFi routers can open an additional guest network, or run several independent WiFi networks. For cables there is VLAN. For example: you configure 4 independent networks, put your computers on one, your IoT devices on another, your TV and NAS storage on a third, and your guests and visitors on yet another. You tell your router that the IoT devices can’t be messed with by guests and can only connect to their respective update servers on the internet and to your smarthome. Your guests can only connect to the internet, not to your other devices or each other. The TV is blocked from sending your behaviour-tracking data to arbitrary companies; it can only access your NAS and its update servers. The devices you trust go on the network that is easygoing with the restrictions. You can make it arbitrarily complex or simple. This would all be configured in the router’s firewall.
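On a Linux-based router, the gist of that segmentation could look like this (just a sketch – the `vlan*` and `wan0` interface names are hypothetical, and restricting IoT to specific update servers would need address sets on top):

```
nft add table inet segments
nft add chain inet segments forward '{ type filter hook forward priority 0; policy drop; }'
nft add rule inet segments forward ct state established,related accept
nft add rule inet segments forward iifname vlan10 accept               # computers: anywhere
nft add rule inet segments forward iifname vlan40 oifname wan0 accept  # guests: internet only
nft add rule inet segments forward iifname vlan20 oifname wan0 accept  # IoT: internet only
# everything not explicitly allowed (guest -> IoT, TV -> arbitrary
# internet hosts, ...) hits the drop policy
```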
But an approach like this isn’t perfect by any means. The IoT devices can still mess with each other. Everything is a hassle to set up. And the WiFi is a single point of failure: if there are any security vulnerabilities in the router’s WiFi stack, attackers are probably just as likely to get into the guest WiFi as into your secured WiFi. And then the whole setup and separation was an exercise in futility.
I mean something like: you have a network drive that you upload your vacation pictures to in case your camera/phone gets stolen. You can now immediately block everyone from all countries except France, since that’s where you’re traveling. This is a crude example, but similar to what we sometimes do with credit cards. You can also set up a VPN that connects specifically you to your home network or services: your Nextcloud server can’t be reached or hacked from the internet unless you also have the VPN credentials to connect to it in the first place. You obviously need some means of mapping the concept ‘user’ to something that is distinguishable from a network perspective. If you know in advance which IP addresses you’re going to connect from, this is easy. If you don’t, you have to use something like a VPN to accomplish it – make just your phone able to dial in to your home network. (Or compromise, like in the France example.)
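As a sketch of the crude variant (the `198.51.100.0/24` range is a placeholder for wherever you expect to connect from, and it assumes a drop-by-default input chain like `inet filter input`; real per-country blocking would need address sets fed from geo-IP data):

```
# only this source range may reach the network drive on port 443;
# everything else never gets past the drop policy
nft add rule inet filter input ip saddr 198.51.100.0/24 tcp dport 443 accept
```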
How would something like this normally be accomplished? I know that Firewalld can select a zone based on the connection, but, if I understand correctly, this is decided by the Firewalld daemon rather than by the packet-filtering firewall itself (e.g. nftables). I don’t think an application-layer firewall would be able to differentiate networks, so something like OpenSnitch wouldn’t be able to control this, for example.
What would be a better alternative that you would suggest?
The unfortunate thing about this – and I have encountered it personally – is that some networks may block VPN-related traffic. You can take measures to obfuscate the VPN traffic from the network, but it is still a potential headache that could lock you out of your service.
Mmh, I probably was way too vague with that. This is done by something like FirewallD, or whatever Windows or macOS use for it. AFAIK it then uses packet filtering to accomplish the task. It seems FirewallD includes the packet filtering itself rather than tying into nftables and handing the filtering task off to that. I don’t think OpenSnitch does things like that. I’m really not an expert on firewalls – I could be wrong. If you read the Wikipedia article (which isn’t that good) you’ll see there are at least 3 main types of firewall, probably more sub-types, and a plethora of different implementations. Some software does more than one of those things, and everything kind of overlaps. Depending on the use-case you might need more than just one concept like packet filtering. Or connect different pieces of software – for example, detect which network was connected to and re-configure the packet filter. Or, like fail2ban: read the logfiles with one piece of software and hand the results to the packet-filter firewall to ban the attackers.
I don’t really know how the network-connection detection is accomplished and how it manages the firewall. Either something pops up and I click on it, or it doesn’t. My laptop has just 3 ports open: ssh, ipp (printing) and mdns. I haven’t felt the need to address that or care about a firewall on that machine.

But I’ve made mistakes. I had mDNS/Bonjour – whatever it is that automatically shows who is on the network and which services they offer – activated, and my laptop showed up among the Apple devices at work; I didn’t intend to appear in anyone’s sharing list. And at one point I forgot to deactivate a webserver on my laptop. I had used it to design a website and then forgotten about it. Everyone on the local networks I connected to in that time could have accessed it, and depending on where I was, that could have made me mildly embarrassed. But no one did, and I eventually deleted the webserver.

I think I’ve been living alright without caring about a firewall on my private laptop. I could have prevented that hypothetical scenario by using a firewall that detects where I am, but far more embarrassing stuff happens to other people – like people changing their name and then AirDropping silly stuff to someone who is holding a lecture, or Skype popping up while a screen is mirrored to the projector in front of a large audience. But that has nothing to do with firewalls. Also, in the old days every Windows machine and network share was displayed to the whole network anyway, and nothing ever happened to me. And while I think that is not a good argument at all, I feel protected enough by using the free software I do and roughly knowing how to use a computer. I don’t see a need to install a firewall just to feel better. Maybe that changes once my laptop is cluttered and I lose track of what software opens new ports.
On my server I use nftables: drop everything and specifically allow the ports that I want open. In case I forget about an experiment or configure something entirely wrong (which has also happened), it adds a layer of protection there. I handle things differently because the server is directly connected to the internet and gets targeted, while my laptop is behind some router or firewall all the time. Additionally, I set up fail2ban and configured every service so it isn’t susceptible to password brute-forcing. I’m currently learning about Web Application Firewalls – maybe I’ll put ModSecurity in front of my Nextcloud. But it should be alright on its own; I keep it updated and followed best practices when setting it up.
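To make the “drop everything, allow specific ports” part concrete, such a ruleset is pleasantly small (a sketch – the ports are just examples for ssh plus a webserver, adjust to what you actually run):

```
nft -f - <<'EOF'
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif lo accept                            # local traffic
    ct state established,related accept      # replies to my own connections
    tcp dport { 22, 80, 443 } accept         # ssh + http(s), nothing else
    meta l4proto { icmp, ipv6-icmp } accept  # ping, path MTU discovery etc.
  }
}
EOF
```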
I really don’t have a good answer to that. Separating your assortment of IoT devices from the rest of the network is probably a good idea. I personally would stop at that. I wouldn’t install cameras inside my house, and I wouldn’t buy an Alexa. I have a few smart lightbulbs and 2 thermostats; they communicate via Zigbee (not WiFi), so that’s my separate network. And I do have a few WiFi IoT devices – a few plugs and an LED strip. I took care to buy ones where I could hack the firmware and flash Tasmota or ESPHome onto them. So they run free software now and don’t connect to some manufacturer’s cloud. And I can keep them updated, and hopefully free of security vulnerabilities indefinitely, despite them originally being really cheap no-name stuff from China.
You can also set up a guest WiFi (for your guests) if you want to. I recently did, but didn’t bother for many years. I feel I can trust my guests; we’re old enough now and have outgrown the time when it was funny to mess with other people’s stuff, set an alarm for 3am or change the language to Arabic. And all they can do is use my printer anyway. So I usually just give my WiFi password to anyone who asks.
However, what I do might not be good advice for other people. I know people who don’t like to give their WiFi credentials to anyone, since the connection could be used to do illegal stuff – that would fall back on whoever owns the internet connection, and they’d face the legal trouble. That can also happen with a guest WiFi. I’m personally not a fan of that kind of legislation: if somebody uses my tools to commit a crime, I don’t think I should be held responsible for it. So I don’t participate in the fearmongering and just share my tools and internet connection anyway.
(And you don’t absolutely need to put in all of that effort at home. Companies need to, since sending all the employees home and then paying 6 figures to another company to analyze the attack and restore the data is very expensive. At home you’re somewhat unlikely to get targeted directly. You’ll just be probed by all the stuff that scans for vulnerable and old IoT devices, open RDP connections, SSH, insecure webservers and badly configured telephony boxes. Your home WiFi router will do the bare minimum, and the NAT on it will filter that out for you. Do backups, though.)
That’s a bummer. There is not much you can do except obfuscate your traffic. Use something that runs on port 443 and looks like HTTPS (I think that’d be a TCP connection), or some other means of obfuscating the traffic. I think there are several approaches available.
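The simplest of those (a sketch – unit names and config paths vary by distro, and deep packet inspection can still fingerprint it, which is where tools like stunnel or obfsproxy come in) is just moving the VPN onto TCP 443:

```
# make OpenVPN share port and protocol with HTTPS
cat >> /etc/openvpn/server/server.conf <<'EOF'
port 443
proto tcp
EOF
systemctl restart openvpn-server@server
```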
Firewalld is capable of this – it can switch zones depending on the current connection.
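With NetworkManager it’s a matter of binding each connection to a zone (a sketch – the connection names here are hypothetical, use whatever yours are called):

```
nmcli connection modify "HomeWifi" connection.zone home       # trusted: ssh/ipp/mdns allowed
nmcli connection modify "AirportWifi" connection.zone public  # untrusted: nearly everything closed
firewall-cmd --get-active-zones                               # show which zone applies right now
```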
There does still exist the risk of a vulnerability being pushed to whatever software you use – one that is essentially out of your control, and that could be used as an attack vector if all ports are reachable.
Interesting! I haven’t heard of this. Side note, out of curiosity, how did you go about installing your Nextcloud instance? Manual install? AIO? Snap?
It would be a rather difficult thing to prove – one could certainly make the argument you did, that someone else on the guest network did something illegal. I would argue it is most likely difficult to prove otherwise.
But this is a really difficult thing to protect against. If someone gets to push code to my computer that gets executed, I’m entirely out of luck. It can do anything that that process is allowed to do: send data, mess with my files and databases, or delete stuff. I’m far more worried about the latter. Sandboxing and containerization are ways to mitigate this. And it’s the reason why I like Linux distributions like Debian. There are always the maintainers and other people who use the same software packages. If somebody should choose to inject malicious code into their software, or it gets bought and the new company adds trackers, it first has to pass the (Debian) maintainers. They’ll probably notice while preparing the update. And since it gets rolled out to other people too, they’ll probably notice and file a bug report. And I’m going to read about it in the news, since it’s something that rarely happens at all on Linux.
On the other hand, it might not be deliberate – just vulnerable software. That happens, and it can be and is exploited in the real world. There I’m also forced to rely on other people to fix it before something happens to me. Again, sandboxing and containerization help to contain it. And keeping everything updated is the proper answer.
What I’ve seen in the real world is a CMS being compromised. Joomla had lots of bugs, and WordPress too. If people install lots of plugins, don’t update the CMS, let it rot and don’t maintain the server at all, after something like 2 years(?) it can get compromised. The people who constantly probe all the internet’s servers will at some point find it, inject something like a rootkit, and use the server to send spam or host viruses or phishing sites. You can pay Cloudflare $200 a month and hope they protect you from that, or use a Web Application Firewall and keep that up to date yourself, or just keep the software itself up to date. If you operate some online services and there is some rivalry going on, it’s bound to happen faster: people might target your server and specifically scan it for vulnerabilities way earlier than the drive-by attacks get hold of it. Ultimately there is no way around keeping a server maintained.
I have two: YunoHost powers my NAS at home. It contains all the big files, important vacation pictures etc. YunoHost is an AIO solution(?) – an operating system based on Debian that aims at making hosting and administration simple and easy. And it is. You don’t have to worry too much about learning to do all of the stuff correctly, since they do it for you. I’ve looked at the webserver config and so on, and they seem to follow best practices: disallow old HTTPS ciphers, activate HSTS, and all the stuff that makes cross-site scripting and similar attacks hard to impossible. And I pay for a small VPS. I used Docker and docker-compose on it, read all the instructions and configured the reverse proxy myself. I also do some experimentation there in other Docker containers, try new software… But I don’t really like maintaining all that stuff. Nextcloud and Traefik seem somewhat stable, but I have to regularly fiddle with the docker-compose files of other projects that change after a major update. I’m currently looking for a solution to make that easier, and planning to rework that server – and then also run Lemmy, Matrix chat and a microblogging platform on it.
And it depends on where you live and the legislation there. If someone downloads some Harry Potter movies or uses your WiFi to send bomb threats to their school… they’ll log the IP, contact the ISP, and the Internet Service Provider is forced to hand over your name. You’ll get a letter or a visit from the police. If they proceed and sue you, you’ll have to pay a lawyer to defend yourself, and it’s a hassle. I’d call it coercion – even if you’re in the right, they can temporarily make your life a misery.

In Germany, we have the concept of “Störerhaftung” on top. Even if you’re not the offender yourself, if you were willingly (or causally adequately(?)) part of a crime, you’re considered a “disruptor” and can be held responsible, especially for stopping that “disruption”. I think it was meant to get at people who technically don’t commit crimes themselves but deliberately enable other people to do so. For some time it got applied to WiFi here. The constitutional court had to rule, and now I think it doesn’t really apply to that anymore. It’s complicated… I can’t sum it up in a few sentences.

Nowadays they just send you letters threatening to sue, demanding a hundred euros for the lawyer who wrote the letter. They’ll say your argument is a defensive lie and you did it. Or you need to tell them exactly who did it and rat out your friends/partner/kids or whoever it was. Of course that’s not how it works in the end, but they’ll try to pressure people, and I imagine it is not an enjoyable situation to be in. I’ve never experienced it myself – I don’t download copyrighted stuff from the obvious platforms that are bound to get you in trouble, and neither does anyone else in my close group of friends and family.
Sorry, hard disagree.
I assume you are assuming:
1. You know about all open ports at all times, which is usually not the case.
2. There are no bugs/errors in the network stacks or services with open ports (e.g. you assume a port is only available to localhost).
3. There are no timing attacks, which can easily be mitigated by a firewall.
4. Software one uses does not transitively trigger/start other services, which then open ports you are not even aware of w/o constant port scanning.
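Points 1 and 4 are easy to check but also easy to forget; a quick audit:

```
# what is actually listening right now, and which process owns it
ss -tlnp   # TCP listeners
ss -ulnp   # UDP listeners
```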
I agree with your point that a server is a more controlled environment. Even then, as you pointed out, you want to rate-limit bad login attempts via firewall/fail2ban etc., for the simple reason that even a fully updated SSH server might use a weak key (because of errors/bugs in software/hardware during key generation), and to prevent timing attacks etc.
In summary: IMHO it is bad advice to tell people they don’t need a firewall, because it is demonstrably wrong and just confuses people like OP.
Sure, maybe I’ve worded things too factually and not differentiated between theory and practice. But:
Regarding the summary: I don’t think I want to advise people not to use a firewall. I thought this was a theoretical discussion about the individual arguments. And it’s complicated and confusing anyway. Which firewall do you run? The default Windows firewall is a completely different thing and setup than nftables on a Linux server that closes everything and only opens ports you specifically allow. Next question: how do you configure it? And where do you even run it? On a separate host? Do you always rent 2 VPS? Do you only do perimeter security for your LAN and run a single firewall? Do you additionally run firewalls on all the connected computers in the network? Does that replace the firewall in front of them? What other means of security protection did you implement? As we said, a firewall won’t necessarily protect against weak passwords and keys. And it might not be connected to the software that gets brute-forced, and thus just forward the attack. In practice it’s really complicated, and it always depends on the exact context. It is good practice not to allow everything by default, but to take the approach of blocking everything and explicitly configuring exceptions, like a firewall does. It’s not the firewall itself but this concept behind it that helps.
I think I get you and the ‘theory vs. practice’ point you make is very valid. ;-) I mean, in theory my OS has software w/o bugs, is always up-to-date and 0-days do not exist. (Might even be true in practice for a default OpenBSD installation regarding remote vulnerabilities. :-P)
fail2ban absolutely mitigates a subset of timing attacks in its default setup. ;-)
LIMIT is a high-level concept which can easily be applied with ufw; I don’t know about the default setups of other commonly used firewalls.
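With ufw it is literally one line (the rate limit is ufw’s built-in default: sources that open 6 or more connections within 30 seconds get denied):

```
ufw limit 22/tcp   # allow ssh, but temporarily block sources that hammer it
```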
If someone exposes something like SSH or anything else w/o fail2ban/LIMIT IMHO that is grossly incompetent.
You are totally right, of course firewalls have bugs/errors/misconfigurations… BUT… if you are using a Linux firewall, good chances are that the firewall has been reviewed/attacked/pen-tested more often and more thoroughly than almost all other services reachable from the internet. So, if I have to choose between a potential attacker first hitting a well-tested and maintained firewall or a MySQL server that got no love from Oracle and lives in my distribution as an outdated package, I’ll put my money on the firewall every single time. ;-)
Thank you for pointing out that my arguments don’t necessarily apply to reality. Sometimes I answer questions too directly. And the question wasn’t “should I use a firewall”, or I would have answered with “probably yes”.
I think I have to make a few slight corrections: I think we use the word “timing attack” differently. To me a timing attack is something that relies on the exact order or spacing at which packets arrive. I was thinking of something like what Tor does, where it shuffles packets around, waits a few milliseconds, merges them, or pads them so they all have the same size. Brute-forcing something isn’t exploiting the exact time at which a certain packet arrives – it’s just sending many of them while the other side lets the attacker try an indefinite number of passwords. I wouldn’t put that in the same category as timing attacks.
Firewall vs MySQL: I don’t think that is a valid comparison. The firewall doesn’t necessarily look into the packets and detect that someone is running a SQL injection; the two do very different jobs. And if the firewall doesn’t do deep packet inspection or rate limiting or something, it just forwards the attack to the service, and it passes through anyway. MySQL probably isn’t a good example either, since it rarely should be exposed to the internet in the first place. I’ve configured MariaDB to listen only on the internal interface, so it ignores packets from other computers. Additionally, I didn’t open the port in the firewall – but MariaDB doesn’t listen on that interface anyway.

Maybe a better comparison would be a webserver with HTTPS. The firewall can’t look into the packets, because it’s encrypted traffic. It can’t tell an attack apart from a legitimate request, and just forwards both to the webserver. Now it’s the same with or without a firewall. Or you terminate the encrypted traffic at the firewall and do packet inspection or complicated heuristics. But that shifts the complexity (including potential security vulnerabilities in complex code) from the webserver to the firewall. And it’s a niche setup that also isn’t well tested. And you need to predict the attacks: if your software has known vulnerabilities that won’t get fixed, this is a valid approach, but you can’t know future attacks.
Having a return channel from the webserver/software to the firewall so the application can report an attack and order the firewall to block the traffic is a good thing. That’s what fail2ban is for. I think it should be included by default wherever possible.
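That return channel is fail2ban’s banaction: for every ban it calls an action script that inserts a drop rule into the packet filter. For example, to hand bans to nftables instead of the iptables default (a sketch, assuming a stock fail2ban install):

```
cat >> /etc/fail2ban/jail.local <<'EOF'
[DEFAULT]
banaction = nftables-multiport
EOF
systemctl reload fail2ban
```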
I think there is no way around using well-written software if you expose it to the internet (like a webserver or a service that is used by other people). If it doesn’t need to be exposed to the internet, don’t expose it – any means of assuring that is alright. For crappy software that is exposed and needs to be, a firewall doesn’t do much. The correct tools for that are virtualization, containers, VPNs, and replacing that software… Maybe also the firewall, if it can tell good and bad actors apart by some means. But most of the time that’s impossible for the firewall to do.
I agree. You absolutely need to do something about security if you run services on the internet. I do, and have run a few services. Webserver logs (especially if you have a WordPress install or some other commonly attacked CMS), SSH and Voice-over-IP servers get bombarded with automated attacks. Same for Remote Desktop, Windows network shares and IoT devices. If I disable fail2ban, the attackers ramp up the traffic and I can see attacks scroll through the logfiles all day.
I think a good approach is: