On Thu, 2021-05-06 at 10:33 -0700, Kees Cook wrote:
> On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
[...]
> > 1. Memory safety for user space code. Once the secret memory is
> >    allocated, the user can't accidentally pass it into the kernel
> >    to be transmitted somewhere.
>
> In my first read through, I didn't see how cross-userspace operations
> were blocked, but it looks like it's the various gup paths where
> {vma,page}_is_secretmem() is called. (Thank you for the self-test!
> That helped me follow along.) I think this access pattern should be
> more clearly spelled out in the cover letter (i.e. "This will block
> things like process_vm_readv()").
I'm sure Mike can add it.
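To make the pattern concrete, something like the sketch below is what
we're talking about (illustrative only, not the series' selftest; it
assumes the x86-64 syscall number 447 and no flags):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447	/* assumed x86-64 number */
#endif

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = syscall(__NR_memfd_secret, 0);
	char *sec;

	if (fd < 0 || ftruncate(fd, page) < 0)
		return 1;

	sec = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (sec == MAP_FAILED)
		return 1;
	strcpy(sec, "top secret");

	/* gup-based access, blocked by the {vma,page}_is_secretmem()
	 * checks: expected to fail with EFAULT instead of returning
	 * the secret, even for our own pid. */
	char buf[64] = { 0 };
	struct iovec local = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct iovec remote = { .iov_base = sec, .iov_len = sizeof(buf) };

	if (process_vm_readv(getpid(), &local, 1, &remote, 1, 0) < 0)
		perror("process_vm_readv on secret area");

	return 0;
}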
> I like the results (inaccessible outside the process), though I
> suspect this will absolutely melt gdb or other ptracers that try to
> see into the memory.
I wouldn't say "melt" ... one of the demos we did at FOSDEM was using
gdb/ptrace to extract secrets and then showing it couldn't be done if
secret memory was used. You can still trace the execution of the
process (and thus you could extract the secret as it's processed in
registers, for instance) but you just can't extract the actual secret
memory contents ... that's a fairly limited and well-defined
restriction.
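If anyone wants to see the effect without digging out the FOSDEM demo,
a rough standalone sketch (not the demo code; same syscall-number
assumption as above) is:

#define _GNU_SOURCE
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/wait.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447	/* assumed x86-64 number */
#endif

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = syscall(__NR_memfd_secret, 0);
	char *sec;
	pid_t child;

	if (fd < 0 || ftruncate(fd, page) < 0)
		return 1;

	sec = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (sec == MAP_FAILED)
		return 1;
	strcpy(sec, "top secret");

	child = fork();
	if (child == 0) {		/* tracee: just sits on the secret */
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);
		_exit(0);
	}
	waitpid(child, NULL, 0);

	/* The ptrace read goes through the blocked gup path, so this is
	 * expected to return -1 (EIO/EFAULT) rather than the secret;
	 * register and single-step access still work as usual. */
	errno = 0;
	if (ptrace(PTRACE_PEEKDATA, child, sec, NULL) == -1 && errno)
		perror("PTRACE_PEEKDATA on secret area");

	kill(child, SIGKILL);
	waitpid(child, NULL, 0);
	return 0;
}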
> Don't get me wrong, I'm a big fan of such concepts[0], but I see
> nothing in the cover letter about it (e.g. the effects on "ptrace" or
> "gdb" are not mentioned.)
Sure, but we thought "secret" covered it. It wouldn't be secret if
gdb/ptrace from another process could see it.
> There is also a risk here of this becoming a forensics nightmare:
> userspace malware will just download their entire executable region
> into a memfd_secret region. Can we, perhaps, disallow mmap/mprotect
> with PROT_EXEC when vma_is_secretmem()? The OpenSSL example, for
> example, certainly doesn't need PROT_EXEC.
I think disallowing PROT_EXEC is a great enhancement.
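Something like the sketch below in the secretmem ->mmap handler is what
I have in mind (a sketch only, not a patch against the series; clearing
VM_MAYEXEC is what stops a later mprotect(PROT_EXEC) from re-adding
the permission):

static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* sketch: refuse executable secret areas outright ... */
	if (vma->vm_flags & VM_EXEC)
		return -EPERM;

	/* ... and clear VM_MAYEXEC so a later mprotect(PROT_EXEC)
	 * can't bring it back */
	vma->vm_flags &= ~VM_MAYEXEC;

	/* the existing flag checks and VM_LOCKED handling from the
	 * series would continue here */
	return 0;
}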
> What's happening with O_CLOEXEC in this code? I don't see that
> mentioned in the cover letter either. Why is it disallowed? That
> seems a strange limitation for something trying to avoid leaking
> secrets into other processes.
I actually thought we forced it, so I'll let Mike address this. I
think allowing it is great, so the secret memory isn't inherited by
children, but I can see use cases where a process would want its child
to inherit the secrets.
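Either way, the caller can pick its own policy on the fd; something
like this hypothetical helper (same syscall-number assumption as
above) covers both cases:

#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447	/* assumed x86-64 number */
#endif

/* hypothetical helper: create a secret fd and decide whether it
 * survives execve() */
static int secret_fd(int inherit_across_exec)
{
	int fd = syscall(__NR_memfd_secret, 0);

	if (fd < 0)
		return fd;

	/* FD_CLOEXEC only governs whether the fd survives execve();
	 * fork()ed children share it (and the mapping) either way. */
	fcntl(fd, F_SETFD, inherit_across_exec ? 0 : FD_CLOEXEC);
	return fd;
}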
> And just so I'm sure I understand: if a vma_is_secretmem() check is
> missed in future mm code evolutions, it seems there is nothing to
> block the kernel from accessing the contents directly through
> copy_from_user() via the userspace virtual address, yes?
Technically, no, because copy_from_user() goes via the userspace page
tables, which do have access.
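To illustrate the distinction with a standalone sketch (same
assumptions as above): a buffered write() of the secret area copies
through the user page tables and is expected to still go through,
while the gup-based paths are the ones that refuse it.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447	/* assumed x86-64 number */
#endif

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = syscall(__NR_memfd_secret, 0);
	int pipefd[2];
	char *sec;

	if (fd < 0 || ftruncate(fd, page) < 0 || pipe(pipefd) < 0)
		return 1;

	sec = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (sec == MAP_FAILED)
		return 1;
	strcpy(sec, "top secret");

	/* pipe_write() copies the data with copy_from_user() through
	 * the user page tables, which still map the secret page, so
	 * this is expected to succeed. */
	if (write(pipefd[1], sec, strlen(sec)) == (ssize_t)strlen(sec))
		printf("write() of the secret buffer succeeded\n");

	return 0;
}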
> > 2. It also serves as a basis for context protection of virtual
> >    machines, but other groups are working on this aspect, and it is
> >    broadly similar to the secret exfiltration from the kernel
> >    problem.
> >
> > > Is this intended to protect keys/etc after the attacker has
> > > gained the ability to run arbitrary kernel-mode code? If so,
> > > that seems optimistic, doesn't it?
> >
> > Not exactly: there are many types of kernel attack, but mostly the
> > attacker either manages to effect a privilege escalation to root or
> > gets the ability to run a ROP gadget. The object of this code is to
> > be completely secure against root trying to extract the secret
> > (somewhat similar to the lockdown idea), thus defeating privilege
> > escalation, and to provide "sufficient" protection against ROP
> > gadgets.
> >
> > The ROP gadget thing needs more explanation: the usual defeatist
> > approach is to say that once the attacker gains the stack, they can
> > do anything because they can find enough ROP gadgets to be Turing
> > complete. However, in the real world, given the kernel stack size
> > limit and address space layout randomization making finding gadgets
> > really hard, usually the attacker gets one or at most two gadgets to
> > string together. Not having any in-kernel primitive for accessing
> > secret memory means the one-gadget ROP attack can't work. Since the
> > only way to access secret memory is to reconstruct the missing
> > mapping entry, the attacker has to recover the physical page and
> > insert a PTE pointing to it in the kernel and then retrieve the
> > contents. That takes at least three gadgets, which is a level of
> > difficulty beyond most standard attacks.
> As for protecting against exploited kernel flaws I also see benefits
> here. While the kernel is already blocked from directly reading
> contents from userspace virtual addresses (i.e. SMAP), this feature
> does help by blocking the kernel from directly reading contents via
> the direct map alias. (i.e. this feature is a specialized version of
> XPFO[1], which tried to do this for ALL user memory.) So in that
> regard, yes, this has value in the sense that to perform
> exfiltration, an attacker would need a significant level of control
> over kernel execution or over page table contents.
>
> Sufficient control over PTE allocation and positioning is possible
> without kernel execution control[3], and "only" having an arbitrary
> write primitive can lead to direct PTE control. Because of this, it
> would be nice to have page tables strongly protected[2] in the
> kernel. They remain a viable "data only" attack given a sufficiently
> "capable" write flaw.
Right, but this is on the radar of several people and when fixed will
strengthen the value of secret memory.
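For reference, the primitive underneath the direct map point is
roughly the following; set_direct_map_{invalid,default}_noflush() are
the real arch helpers behind ARCH_HAS_SET_DIRECT_MAP, but the wrappers
here are only a sketch, not the actual fault/free paths from the
series:

#include <linux/gfp.h>
#include <linux/set_memory.h>

/* sketch: allocate a page whose linear-map ("direct map") alias is
 * removed, so a stray kernel read through the direct map faults
 * instead of returning the secret */
static struct page *secret_page_alloc(gfp_t gfp)
{
	struct page *page = alloc_page(gfp | __GFP_ZERO);

	if (!page)
		return NULL;

	if (set_direct_map_invalid_noflush(page)) {
		__free_page(page);
		return NULL;
	}
	return page;
}

/* sketch: restore the direct map alias before the page goes back to
 * the page allocator */
static void secret_page_free(struct page *page)
{
	set_direct_map_default_noflush(page);
	__free_page(page);
}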
> I would argue that page table entries are a more important asset to
> protect than userspace secrets, but given the difficulties with XPFO
> and the not-yet-available PKS I can understand starting here. It
> does, absolutely, narrow the ways exploits must be written to
> exfiltrate secret contents. (We are starting to now constrict[4] many
> attack methods into attacking the page table itself, which is good in
> the sense that protecting page tables will be a big win, and bad in
> the sense that focusing attack research on page tables means we're
> going to see some very powerful attacks.)
> > > I think that a very complete description of the threats which
> > > this feature addresses would be helpful.
> >
> > It's designed to protect against three different threats:
> >
> > 1. Detection of user secret memory mismanagement
>
> I would say "cross-process secret userspace memory exposures" (via a
> number of common interfaces by blocking it at the GUP level).
>
> > 2. significant protection against privilege escalation
>
> I don't see how this series protects against privilege escalation.
> (It protects against exfiltration.) Maybe you mean to include this in
> the first bullet point (i.e. "cross-process secret userspace memory
> exposures, even in the face of privileged processes")?
It doesn't prevent privilege escalation from happening in the first
place, but once the escalation has happened it protects against
exfiltration by the newly minted root attacker.
> > 3. enhanced protection (in conjunction with all the other in-kernel
> >    attack prevention systems) against ROP attacks.
>
> Same here, I don't see it preventing ROP, but I see it making
> "simple" ROP insufficient to perform exfiltration.
Right, that's why I call it "enhanced protection". With ROP the design
goal is to take exfiltration beyond the simple, and require increasing
complexity in the attack ... the usual security whack-a-mole approach
... in the hope that script kiddies get bored by the level of
difficulty and move on to something easier.
James