Linux commit e1f1b15 (mm/huge_memory.c)

mm/huge_memory.c: fix data loss when splitting a file pmd

__split_huge_pmd_locked() must check if the cleared huge pmd was dirty,
and propagate that to PageDirty: otherwise, data may be lost when a huge
tmpfs page is modified then split then reclaimed.

How has this taken so long to be noticed?  Because there is no problem
when the huge page is written by a write system call (shmem_write_end()
calls set_page_dirty()), nor when the page is allocated for a write fault
(fault_dirty_shared_page() calls set_page_dirty()); but when it is
allocated for a read fault (which MAP_POPULATE simulates), nothing calls
set_page_dirty().

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1807111741430.1106@eggly.anvils
Fixes: d21b9e57c74c ("thp: handle file pages in split_huge_pmd()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Ashwin Chaugule <ashwinch@google.com>
Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: <stable@vger.kernel.org>    [4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
 mm/huge_memory.c | 2 ++
 1 file changed, 2 insertions(+)