path: root/kernel/cgroup.c
author		Tejun Heo	2014-02-13 19:29:31 +0100
committer	Tejun Heo	2014-02-19 00:23:18 +0100
commit		532de3fc72adc2a6525c4d53c07bf81e1732083d (patch)
tree		72ee4b0fe873589f814939a3b4e60fedb1f9c6ef	/kernel/cgroup.c
parent		Revert "cgroup: use an ordered workqueue for cgroup destruction" (diff)
cgroup: update cgroup_enable_task_cg_lists() to grab siglock
Currently, there's nothing preventing cgroup_enable_task_cg_lists() from missing a freshly set PF_EXITING and racing against cgroup_exit(). Depending on the timing, cgroup_exit() may finish with the task still linked on a css_set, leading to list corruption. Fix it by grabbing siglock in cgroup_enable_task_cg_lists() so that PF_EXITING is guaranteed to be visible.

This whole on-demand cg_list optimization is extremely fragile and has ample possibility to lead to bugs which can cause things like a once-a-year oops during boot. I'm wondering whether the better approach would be to just add "cgroup_disable=all" handling which disables cgroup entirely, rather than tempting fate with this on-demand craziness.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
Cc: stable@vger.kernel.org
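For context, below is a minimal sketch of the task walk this patch modifies, reconstructed from the diff hunk further down. The siglock section is taken directly from the hunk; the opening read_lock()/do_each_thread()/task_lock() calls are inferred from the matching unlock calls visible in the hunk and are assumptions rather than verbatim kernel source.

/*
 * Sketch only: reconstructed from the hunk below, not verbatim source.
 * The idea, per the commit message, is to check PF_EXITING and link the
 * task onto its css_set while holding siglock, so that the flag set by
 * the exit path is guaranteed to be visible here.
 */
read_lock(&tasklist_lock);		/* inferred from read_unlock() in the hunk */
do_each_thread(g, p) {			/* inferred from while_each_thread() in the hunk */
	task_lock(p);			/* inferred from task_unlock() in the hunk */

	spin_lock_irq(&p->sighand->siglock);
	if (!(p->flags & PF_EXITING) && list_empty(&p->cg_list))
		list_add(&p->cg_list, &task_css_set(p)->tasks);
	spin_unlock_irq(&p->sighand->siglock);

	task_unlock(p);
} while_each_thread(g, p);
read_unlock(&tasklist_lock);

Taking siglock around the check-and-link closes the window in which cgroup_exit() could finish while the task is still being linked onto a css_set, which is the list corruption described above.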
Diffstat (limited to 'kernel/cgroup.c')
-rw-r--r--	kernel/cgroup.c	5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 68d87103b493..105f273b6f86 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2905,9 +2905,14 @@ static void cgroup_enable_task_cg_lists(void)
 		 * We should check if the process is exiting, otherwise
 		 * it will race with cgroup_exit() in that the list
 		 * entry won't be deleted though the process has exited.
+		 * Do it while holding siglock so that we don't end up
+		 * racing against cgroup_exit().
 		 */
+		spin_lock_irq(&p->sighand->siglock);
 		if (!(p->flags & PF_EXITING) && list_empty(&p->cg_list))
 			list_add(&p->cg_list, &task_css_set(p)->tasks);
+		spin_unlock_irq(&p->sighand->siglock);
+
 		task_unlock(p);
 	} while_each_thread(g, p);
 	read_unlock(&tasklist_lock);