systemd and "passive" security dependencies for services?
Christoph Anton Mitterer
calestyo at scientia.net
Thu Jun 19 01:46:44 BST 2014
Hi Tollef, et al
Sorry for the long delay... I've forwarded that idea now to upstream:
https://bugs.freedesktop.org/show_bug.cgi?id=80169
Lennart told me that there is a new special target, network-pre.target, in
the next version... but I think it's not really a good solution for what
I want, basically for the same reasons as network.target itself
(it's vaguely defined what it actually pulls in, and it possibly pulls in
much more than many services actually need)... see more in the ticket above.
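For reference, AFAIU the intended use of network-pre.target would be roughly
the following (just a sketch; the unit name and the ExecStart path are made
up):

    # /etc/systemd/system/load-netfilter.service  (hypothetical example)
    [Unit]
    Description=Load netfilter rules before any network interface comes up
    DefaultDependencies=no
    Wants=network-pre.target
    Before=network-pre.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # placeholder path, not a real script:
    ExecStart=/usr/local/sbin/load-netfilter-rules

    [Install]
    WantedBy=sysinit.target

But that only guarantees the ordering against the network management
software - it still doesn't make the individual daemons depend on the rules
actually being loaded, which is my point.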
And I guess the discussion (should you have any comments after this)
should move to the upstream ticket as well; only really Debian-specific
stuff should stay here.
On Mon, 2014-05-26 at 08:13 +0200, Tollef Fog Heen wrote:
> > Ah, I thought you were thinking that the daemon/system which brings
> > up/down any dynamic interfaces/addresses could take the duty of starting
> > any "dependencies" like <load netfilter rules> as soon as the event of a
> > new interface occurs.
>
> Given netfilter rules can be loaded before interfaces exist, that'd be
> pretty pointless.
Sure... then that was just some misunderstanding between us.
> > Well I think it's worse in how it's actually used, since with sysvinit,
> > simply "everyone" depended on $network, so one was automatically safe in
> > a way.
>
> No, you're not. If you think that and you care as much about security
> as you seem to imply you need to go back and recheck your findings.
Okay, then please elaborate why.
As I said: I don't think one was safer with sysvinit from a technical
POV (actually sysvinit didn't care whether the dependencies of a service
really started or not - it just ordered them).
But with sysvinit it seems to have been just common behaviour for
services to depend on $network (thereby bringing in reverse dependencies
of $network like e.g. iptables-persistent).
With systemd, far from all services seem to depend on
network.target... just take Debian's ssh.service as an example... and
AFAIU, network.target isn't pulled in via DefaultDependencies= either.
Or am I overlooking something here?
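Right now, if I wanted sshd to wait for (and require) the netfilter rules,
the only way seems to be a manual drop-in, e.g. something like this (just a
sketch - assuming a netfilter-persistent.service unit existed):

    # /etc/systemd/system/ssh.service.d/require-netfilter.conf
    [Unit]
    Requires=netfilter-persistent.service
    After=netfilter-persistent.service

...and that would have to be repeated for every single daemon, which is
exactly what I'd like to avoid.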
> Because you can't even come up with a contrived use case, never mind a
> non-contrived one. Being a universal operating system doesn't mean
> «try to anticipate everything that can and can't possibly happen and
> cater to all of them on an equal footing». It means something closer to
> «try to accommodate a wide range of use cases».
Well, making sure that services don't start before - and don't start at
all when - whatever is intended to secure their networking isn't in place
yet: I don't think that's just some obscure scenario which only a handful
of people would want.
> > SSH might be a special exception... since it's usually secure per
> > default.
>
> Any services not secure by default should surely not start by default.
> Nothing in the archive should be «install this and provide remote
> root».
Well, but it's just illusory that this is not the case.
Debian usually follows the - IMHO bad - policy that when a service is
installed it is immediately started with some config which is more or
less secure.
In cases like ejabberd/erlang you already have unsecured services that
listen on the wildcard interface (that weird erlang daemon).
In cases like Apache httpd there is the "It works" page, and practice
has shown that it might not be the best idea to expose even that.
I mean, your argument here is: if a service is not secure, it should not
be started - but hey, if all services were secure by themselves, why
would firewalls exist?
> > But even ssh may be used in a way where one strongly wishes to have
> > iptables in place... so in a case like SSH I would perhaps suggest making
> > it easily configurable via debconf?
>
> I doubt you'll be able to interest Colin in that, but feel free to ask
> him. I think that idea is a terrible one.
Well, I guess it's a matter of taste... some sysadmins may say something
like "we use password-based authentication with ssh, but generally limit
access to ssh to some internal nodes, for which passwords are safe
enough. But we can't accept an ssh that runs and allows authentication
with just passwords while all nodes have access because the netfilter
rules are not loaded".
Others, like you, say "well, I can accept such small possible security
issues, or I work around them some way - but in any case, I cannot accept
an SSH that is not reachable just because of bogus netfilter rules".
In the end it's a matter of taste - my opinion is usually that things
should default to the more secure way.
> > > If you care about security you need to monitor it.
> >
> > Yeah,... well I wrote about that a bit later in this mail...
> > I don't think this is the right way to design systems:
> > i.e. designing them insecure... and monitor whether that insecure
> > condition really arises.
>
> No, you monitor for desired state.
Which is possible for systems that have a limited set of predefined
conditions like "webserver is up and running" or "pgsql can connect to
the server"... but you can't just check for every possible attack vector
- if all of them were known, then all of them would have been fixed long
ago and we'd never see any attacks.
Security isn't like a normal service where you just monitor whether it's
up or not.
It's just like with e.g. a firewall ruleset: good rulesets don't just
blacklist a few things and default to ACCEPT everything else - they
rather DENY everything per default and ACCEPT exactly those packets
which are known to be desired.
> > Well, to be honest, that's quite some disturbing view about security.
> > An attacker usually doesn't have vacation when such "window" is open.
> >
> > Either such a window exists and is there, then we should set up things in
> > a way, that per default it's closed... or it's not there, then we don't
> > have to act of course.
> You're here arguing for no services should start by default
I'd say the system should come up to such a state that normal login
is possible so the problem can be fixed... whether this includes running
networking and SSH is, as I've said, rather a matter of taste.
The good thing about systemd, however, is that everyone can very easily
change a Requires[By]= to a Wants[By]=, so I don't really see the
point of arguing about this.
The only thing which could possibly be worth arguing about is which
behaviour should be the default.
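E.g., assuming some network-secured.target as in my proposal (a purely
hypothetical name), the difference between the two behaviours would
literally be one word in the respective unit (or a drop-in):

    # hard dependency: the daemon does not start if the target fails
    [Unit]
    Requires=network-secured.target
    After=network-secured.target

    # soft dependency: the target is pulled in, but the daemon starts anyway
    [Unit]
    Wants=network-secured.target
    After=network-secured.target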
And I agree that most people will probably not accept it if their sshd
doesn't start up just because netfilter rules weren't loaded - even
though I'd say that from a security POV that is the worse choice.
> we should
> not use DNS by default
Depends on what you mean by DNS.
Should a recursive or authoritative nameserver start if the rules aren't
loaded? I think not; both could very likely depend on netfilter rules
for their security - especially the recursive nameserver, since those
usually have to limit the range of their allowed users (DNS
amplification attacks and the like).
Should libc's or any other resolver be allowed to run... well of
course... this is not a service anyway.
> and the default iptables rules in the kernel
> should be to drop all traffic.
Actually I think it would be better if Debian shipped a default set of
rules that ACCEPTs any outgoing traffic but DROPs any incoming traffic
which is not from related/established connections (which is, IIRC, the
default on some other distros).
But to be honest... I don't quite see what these arguments of yours have
to do with the question whether systemd should provide some simple
framework that allows users to have all their daemons depend on some
network security facilities like netfilter or fail2ban, without the need
to manually override all the shipped unit files of the daemons.
> Oh, and no way to break into debug mode
> in the boot loader. Only then have you closed most of the windows.
>
> Of course, then you don't have a usable system, so nobody would be
> interested in using it.
Well, nobody was talking about debugging - and which boot loader?!
Sorry, Tollef, but:
- what I've proposed is a very simple framework that substantially
improves security, follows the core ideas of systemd and still very
easily allows admins to select per daemon and per "security
facility" (e.g. fail2ban, netfilter-persistent|ferm|shorewall, etc.)
whether the daemon should depend on it softly (Wants[By]=) or hard
(Requires[By]=).
- apart from special cases like perhaps ssh, it should be clear to
anyone with even a little understanding of security that it's usually
a very, very bad idea to start any daemons when e.g. netfilter rules and
similar things aren't loaded yet.
- all of this should be doable and configurable very easily... people
don't want to edit the unit files of all their services just to make
sure that this behaviour is guaranteed.
And I think with only small changes, and a bit of teaching for authors of
unit files, all this can be reached quite easily.
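To illustrate, a rough sketch of what I have in mind (all unit names are
hypothetical, this is not an existing systemd facility):

    # 1) the target itself (network-secured.target):
    [Unit]
    Description=Network security facilities are in place

    [Install]
    WantedBy=sysinit.target

    # 2) in each security facility's unit, e.g. netfilter-persistent.service,
    #    ferm.service or fail2ban.service (names assumed):
    [Unit]
    Before=network-secured.target

    [Install]
    RequiredBy=network-secured.target

    # 3) in (or as a drop-in for) each network-facing daemon,
    #    as shown further above:
    [Unit]
    Requires=network-secured.target
    After=network-secured.target

Whether a given facility or daemon uses Wants[By]= or Requires[By]= would
then be the only decision left to the admin.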
So far I couldn't find any substantial argument of yours against the
idea... and since you guys are the maintainers of systemd, I really
wonder why you're not actively in favour of such ideas, where systemd
really allows things which were not possible before.
> > > If having netfilter rules is crucial to postfix' security (which I
> > > really hope is not true)
> > May be very easily the case, if you have e.g. mail submission secured
> > via the source address, which is commonly used.
>
> And that's a terrible design and we should not advocate it.
I see absolutely no reason why this should be terrible design. Actually
this is one of the main reasons why the IETF introduced the submission
port in addition to the normal SMTP port.
Why should a computing centre which has tens of thousands of nodes (and
I administrate such a centre) require all the internal hosts to do any
form of SMTP authentication to send their system mail... if I can just
allow access from my centre's subnet(s)?
> > The thing is... it's perfectly valid to secure services by iptables
> > (which is why we have it), so we must expect that people use it that way
> > and depend on it.
>
> No, it's not. The network is compromised and lies. Never trust the
> network. You need other authentication than weak authenticators like
> source IP.
In an internal switched network (aka the default in any computing centre)
you can very well trust your IPs.
And there are things like IPsec, OpenVPN... which give you even stronger
means.
And one doesn't use iptables just for "authentication"... perhaps you
just want to secure services which otherwise listen on the wildcard
iface... I could give you tons of examples for that... and you simply
don't always have the chance to fix others' code.
> No, you shouldn't monitor if they're running. You should monitor if
> they're working.
Which port was it that you need to knock with your Nagios, which tells
you that everything is secure? 666?
o.O
> Knowing that your mail process is up and running is
> completely uninteresting if you can't deliver mail because /var/mail is
> full.
Well, sure... but monitoring of service availability is completely out
of scope here...?!
What we're talking about is the question: is the networking of these
services secure?
When you have a postfix where only clients that connect via IPsec should
be allowed to relay mail to non-local destinations... how do you check
for that?
Have a Nagios test for every possible source address, for IPv4 and v6,
and every possible port where a daemon might listen... and only if none
of them allows you to relay you say "fine... we're still secure... check
returns OK"?
I mean, that's stupid.
Security can't be monitored like that - what about holes that aren't
always open but only every now and then... even if your Nagios check
tested for all possibly existing holes, it may simply never run while
the hole is open.
> If you care about your iptables rules, you should monitor that the
> loaded rules match what you expect them to be, since as you note, rules
> failing to load won't break the boot.
Which an admin could easily change as well, though... just let
network-secured.target not be WantedBy=sysinit.target, but RequiredBy=.
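I.e. roughly, in the [Install] section of such a (hypothetical)
network-secured.target:

    [Install]
    # soft (what I'd propose as the shipped default):
    #WantedBy=sysinit.target
    # strict, for admins who want a failure here to be fatal:
    RequiredBy=sysinit.target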
Why should I monitor at discrete points in time for a condition that
needs to be met at every point in time, if I can guarantee the same via
the dependency mechanism that systemd gives us here?
Why should I need to run a nagios/icinga instance on every node just to
make sure that my rules were loaded - while that Nagios would only warn
me about unloaded rules (not prevent that situation), while it would
most probably miss the initial window where the rules are not yet loaded
and my system is vulnerable... and while it would continue to monitor
this forever, which is completely useless, since once the rules are
loaded they're there and don't just magically go away without any admin
interaction.
> I hope you see the irony in saying this at the same time as advocating
> trusting source IPs. Networking gear is routinely compromised, often
> before being delivered to your site.
You've heard of VPNs? IPsec? Rings a bell? ;-)
And as mentioned above... netfilter rules are not just about
"authenticating" (based on addresses).
> If it's not done and they require high-quality entropy, they're broken
> and should be fixed. As for /dev/random blocking: yes, that's a
> feature. It means you notice that your system has run out of
> high-quality entropy. You can have one or the other, not both.
There's no good reason for things that need high-quality entropy not to
use a well-seeded PRNG... and I think many systems (such as cryptsetup,
even though I don't like it there myself) and things like mod_ssl do so,
simply because they wouldn't work normally with /dev/random.
But that proves the necessity of my idea:
- *if* they use /dev/urandom... you want to make sure it's very well
seeded, e.g. initially by systemd-random-seed-load.service or later on
by services that continuously re-seed it with high-quality entropy.
- *if* you can configure them to use /dev/random... you want to make
sure that your haveged/ekeyd or whatever is running, since otherwise
these services will simply block after a few seconds.
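I.e. roughly something like this in (or as a drop-in for) such a service
(just a sketch, using the unit names mentioned above; whether it should be
Wants= or Requires= is again the admin's choice):

    [Unit]
    # if the service reads /dev/urandom: make sure the saved seed
    # has been restored first
    After=systemd-random-seed-load.service
    # if it insists on /dev/random: make sure an entropy daemon is running
    Wants=haveged.service
    After=haveged.service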
> > And even if you have some TRNG, or haveged or similar which feeds the
> > kernel entropy pool to provide a high bandwidth in /dev/random, then
> > this just emphasizes my point, namely that systemd should facilitate
> > that it's loaded very early.
>
> It will generally be loaded as soon as the hardware shows up on the bus.
Uhm, I don't understand that... I mean right now, e.g. haveged seems to
start because it's WantedBy=default.target - not because the CPU device
appeared.
And it seems illogical to me to start such a service just because its
device has appeared, if there's nothing that would use it.
> > Sure,... but the default should be the secure one, which is that a
> > service shouldn't start then.
>
> I don't agree with that, nor does current practice in Debian.
Well, current practice doesn't mean it's god-given and a dogma.
If current practice were written in stone, then current practice would
be sysvinit, with systemd or anything else never having a chance ;-)
> We do allow incoming and outgoing connection by default.
Which is not the smartest thing, as I've mentioned before... but these
are completely separate issues - I don't quite understand why you take
"someone else is doing things insecurely" as a pro argument for "we
should do it as well"?
Actually the behaviour of allowing incoming traffic by default is really,
really stupid, since most systems easily get things like ntpd or rpcbind
installed, which per default listen on the wildcard address and are all
known to have had their security issues.
> If you don't care to do it well, there might not be so much to do. If
> you care to do it well, there's a ton of work to do, but I (and I think
> I speak for the rest of the team) don't have the time and interest to do
> that.
Well, I've now brought the idea upstream anyway, which would have been
the way to go in any case.
After all, systemd is also about unification and more consistent
systems... and it would be bad if such a framework as I've proposed were
available in Debian but not in other distros.
I think systemd started with the idea that most services should ship
their unit files upstream, requiring only few modifications per
distro... which is also a reason why my proposal - should it come true -
should take place upstream.
> There's also nothing magic about built-in bits, you can just as
> easily do all the targets and special config outside of the systemd
> package.
Sure, but then I end up rewriting every single unit file of every
service... and every sysadmin who wants to achieve the same will have to
do it as well, perhaps making mistakes, leaving things vulnerable...
etc.
> > If other packages provide it, then which? iptables-persistent? ferm?
> > shorewall? Something new, which all other packages whose daemons would
> > Require= it would need to depend on?
>
> You seem to forget that units can use RequiredBy. Dependencies are not
> one-way in systemd.
Sure (I wrote about RequiredBy= often enough before that you can imagine
I've vaguely heard of it)... but that doesn't really help me.
With Requires= I need to add a line to the unit file of every daemon
that should depend on e.g. netfilter-persistent.
With RequiredBy= I need to add a line to the unit file of
netfilter-persistent for every service I install.
Both are very inconvenient and especially error-prone.
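I.e. either something like this in every daemon's unit (names just
examples):

    # in postfix.service, ssh.service, ... each:
    [Unit]
    Requires=netfilter-persistent.service
    After=netfilter-persistent.service

...or the reverse direction, maintained in one place but still listing
every daemon by hand:

    # in netfilter-persistent.service:
    [Install]
    RequiredBy=postfix.service ssh.service

Neither scales, which is why I'd prefer a common target that both sides
can hook into.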
Cheers,
Chris.