systemd and "passive" security dependencies for services?
Tollef Fog Heen
tfheen at err.no
Mon May 26 07:13:55 BST 2014
]] Christoph Anton Mitterer
> On Fri, 2014-05-23 at 16:46 +0200, Tollef Fog Heen wrote:
>
> > > Sure... but that's how it is... I guess it's absolutely sure that there
> > > will always be software, which doesn't use e.g. NM (and given the issues
> > > I have with NM, I'm actually happy about that).
> > > But even if all software _would_ use NM (or something similar)... NM
> > > doesn't take care about e.g. netfilter rules.
> >
> > This has nothing to do with NM. Interfaces come and go on machines
> > running BGP daemons for instance. Or if you use IPv6 with
> > autoconfigured addresses.
>
> Ah, I thought you were thinking that the daemon/system which brings
> up/down any dynamic interfaces/addresses could take on the duty of starting
> any "dependencies" like <load netfilter rules> as soon as the event of a
> new interface occurs.
Given netfilter rules can be loaded before interfaces exist, that'd be
pretty pointless.
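To illustrate the point: rules can be loaded by a oneshot unit ordered before any interface is configured. A minimal sketch, assuming an iptables-restore-style loader and a rules file at /etc/iptables/rules.v4 (unit name and paths are examples, not anything a package actually ships):

```ini
# load-netfilter.service (hypothetical): load rules before any
# network interface comes up.
[Unit]
Description=Load netfilter rules
DefaultDependencies=no
Wants=network-pre.target
Before=network-pre.target

[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore /etc/iptables/rules.v4
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target
```

Because the ruleset references interfaces only by name, the kernel accepts it even while those interfaces don't exist yet; the rules simply start matching once an interface appears.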
>
> > > Sure... and don't get me wrong... I never tried to show that systemd is
> > > worse than sysvinit.
> > It's not about better or worse.
>
> Well I think it's worse in how it's actually used, since with sysvinit,
> simply "everyone" depended on $network, so one was automatically safe in
> a way.
No, you weren't. If you think that, and you care as much about security
as you seem to imply, you need to go back and recheck your findings.
[...]
> > > Sure... what I meant was rather:
> > > Even though it's probably highly specialised... there may be setups
> > > which require the netfilter rules to be loaded even before the loopback
> > > is up.
> > >
> > > Though admittedly I haven't seen such...
> >
> > I'm not able to come up with a non-contrived case for that. Sure, if
> > somebody has compromised your init system, they can do bad things, but
> > that's not because they can talk to loopback, it's because they've
> > compromised your init system.
>
> Well, doesn't Debian claim to be the Universal Operating System? So just
> because neither you nor I can think of an example right now where having
> netfilter rules on loopback might be important, doesn't mean that there
> aren't any.
> So I think _if_ it's no big deal for us to ensure that rules are loaded
> even before lo, then why not do it?
Because you can't even come up with a contrived use case, never mind a
non-contrived one. Being a universal operating system doesn't mean
«try to anticipate everything that can and can't possibly happen and
cater to all of it on an equal footing». It means something closer to
«try to accommodate a wide range of use cases».
[...]
> SSH might be a special exception... since it's usually secure per
> default.
Any services not secure by default should surely not start by default.
Nothing in the archive should be «install this and provide remote
root».
> But even ssh may be used in a way where one strongly wishes to have
> iptables in place... so in a case like SSH I would perhaps suggest making
> it easily configurable via debconf?
I doubt you'll be able to interest Colin in that, but feel free to ask
him. I think that idea is a terrible one.
> > If you care about security you need to monitor it.
>
> Yeah,... well I wrote about that a bit later in this mail...
> I don't think this is the right way to design systems:
> i.e. designing them insecure... and monitor whether that insecure
> condition really arises.
No, you monitor for desired state.
> If you depend on netfilter (and this is what one generally needs to
> assume), then you make your services not start if it failed to load.
>
> You don't set up some nagios check which runs every day or so to see
> whether there are really some rules in place, or some port closed.
> This may be an additional security measure of course, but not the basic
> thing.
If your nagios check runs once a day, you need to tune your nagios
setup. Really, this is pretty basic sysadmin stuff.
> > > 2) These "things" must be in place, _before_ the service is used... e.g.
> > > netfilter rules must be loaded, _before_ the daemons start, _before_ the
> > > user could do any networking in his sessions, etc. ... or e.g. a TRNG
> > > must be activated, _before_ services want to use high quality entropy.
> > >
> > > If not, then one might have a short time frame, where e.g. postfix
> > > listens and networking is not secured... or where httpd makes TLS
> > > connections, but it still uses poor entropy from right after the system
> > > boot.
> > No. In some cases, having that window where an attacker might attack a
> > surface is perfectly acceptable. In other cases, it's a big deal. It
> > depends on the context.
>
> Well, to be honest, that's quite a disturbing view of security.
> An attacker usually isn't on vacation while such a "window" is open.
>
> Either such a window exists, in which case we should set things up so
> that it's closed by default... or it doesn't exist, in which case we
> don't have to act, of course.
You're arguing here that no services should start by default, that we
should not use DNS by default, and that the default iptables policy in
the kernel should be to drop all traffic. Oh, and that there should be
no way to break into debug mode from the boot loader. Only then have
you closed most of the windows.
Of course, then you don't have a usable system, so nobody would be
interested in using it.
[...]
> > If having netfilter rules is crucial to postfix' security (which I
> > really hope is not true)
> That may very easily be the case if you have, e.g., mail submission
> secured via the source address, which is commonly done.
And that's a terrible design and we should not advocate it. It's barely
better than using .rhosts files to secure login access.
[...]
> The thing is... it's perfectly valid to secure services by iptables
> (which is why we have it), so we must expect that people use it that way
> and depend on it.
No, it's not. The network is compromised and lies. Never trust the
network. You need other authentication than weak authenticators like
source IP.
[...]
> And THIS is actually what you should monitor: Whether or not your
> services are running.
No, you shouldn't monitor if they're running. You should monitor if
they're working. Knowing that your mail process is up and running is
completely uninteresting if you can't deliver mail because /var/mail is
full.
If you care about your iptables rules, you should monitor that the
loaded rules match what you expect them to be, since as you note, rules
failing to load won't break the boot.
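That kind of check can be sketched in a few lines of shell. The function name, file layout, and the idea of dumping the loaded rules (e.g. `iptables-save > current`) to a file first are all illustrative assumptions; exit codes follow the usual Nagios convention (0 = OK, 2 = CRITICAL):

```sh
# Hypothetical Nagios-style check: compare a dump of the currently
# loaded ruleset against the baseline you expect to be loaded.
check_rules() {
    current="$1"    # file holding the loaded ruleset (e.g. iptables-save output)
    baseline="$2"   # file holding the expected ruleset
    if diff -q "$current" "$baseline" >/dev/null 2>&1; then
        echo "OK: loaded rules match baseline"
        return 0
    else
        echo "CRITICAL: loaded rules differ from baseline"
        return 2
    fi
}
```

Run from cron or a monitoring agent every few minutes, this alerts on drift regardless of whether the rules failed to load at boot or were flushed later.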
[...]
> So actually, security done right _is_ black and white - I'm really
> surprised that so many people still don't get this after the whole NSA
> scandal, where one can really see how powerful and sophisticated
> attacks are in practice.
I hope you see the irony in saying this at the same time as advocating
trusting source IPs. Networking gear is routinely compromised, often
before being delivered to your site.
> > If you want high-quality entropy, use /dev/random, not
> > urandom.
>
> Well, that's simply not done by all services... and even if it
> were... /dev/random may block, which may be undesirable.
If it's not done and they require high-quality entropy, they're broken
and should be fixed. As for /dev/random blocking: yes, that's a
feature. It means you notice that your system has run out of
high-quality entropy. You can have one or the other, not both.
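The difference is easy to observe on a Linux box (the /proc path is Linux-specific; note that on kernels since 5.6 /dev/random only blocks until the pool is first initialised, so the distinction mattered more on kernels of this era):

```sh
# /dev/urandom never blocks: a 16-byte read always returns immediately.
head -c 16 /dev/urandom | od -An -tx1

# /dev/random may block until the kernel's entropy estimate is high
# enough -- the "feature" described above: you notice the shortage
# instead of silently getting weaker randomness. The current estimate:
[ -r /proc/sys/kernel/random/entropy_avail ] && \
    cat /proc/sys/kernel/random/entropy_avail || true
```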
> And even if you have some TRNG, or haveged or similar which feeds the
> kernel entropy pool to provide a high bandwidth in /dev/random, then
> this just emphasizes my point, namely that systemd should facilitate
> that it's loaded very early.
It will generally be loaded as soon as the hardware shows up on the bus.
>
> > > b) What should happen, if the "thing" couldn't be started by systemd or
> > > if the daemon (if any) that runs "thing" fails/crashes.
> > Local policy.
>
> Sure,... but the default should be the secure one, which is that a
> service shouldn't start then.
I don't agree with that, nor does current practice in Debian.
> It may be local policy to allow remote root login without password - but
> just because this is an option, Debian doesn't default to it.
> It may be local policy to allow anyone in some local net in submitting
> mails via postfix - but just because this might be okay, we don't
> default to it.
> It may be local policy that TCP connections to postgresql are trusted -
> but we don't default to it.
We do allow incoming and outgoing connections by default. We do DNS
lookups by default. We do things like DHCP by default.
[...]
> > Lots of those targets can be supplied by the local admin or another
> > package. If you do that and convince people to use them, more power to
> > you. We don't have the time and energy to do so within the systemd
> > team.
>
> Well I don't think there's so much you'd have to do, is there?
If you don't care to do it well, there might not be so much to do. If
you care to do it well, there's a ton of work to do, but I (and I think
I speak for the rest of the team) don't have the time and interest to do
that. There's also nothing magic about built-in bits, you can just as
easily do all the targets and special config outside of the systemd
package.
[...]
> I also don't think it should be supplied locally or by other packages...
> actually it should rather go upstream.
Then take it upstream.
> If other packages provide it, then which? iptables-persistent? ferm?
> shorewall? Something new, which all other packages whose daemons would
> Require= it would need to depend on?
You seem to forget that units can use RequiredBy=. Dependencies are not
one-way in systemd.
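Concretely, a firewall package could attach itself to a daemon through its own [Install] section, without the daemon's unit ever mentioning it. A sketch (unit names, paths, and the choice of ferm are all hypothetical):

```ini
# firewall.service (hypothetical): the reverse dependency lives in the
# providing unit, not in the consuming one.
[Unit]
Description=Load local netfilter rules
Before=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ferm /etc/ferm/ferm.conf
RemainAfterExit=yes

[Install]
# `systemctl enable firewall.service` creates a symlink in
# postfix.service.requires/, making postfix require this unit
# without any change to the postfix package.
RequiredBy=postfix.service
```

So iptables-persistent, ferm, or shorewall could each ship such a unit, and daemons needing the protection wouldn't have to pick one.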
--
Tollef Fog Heen
UNIX is user friendly, it's just picky about who its friends are
More information about the Pkg-systemd-maintainers
mailing list