path: root/drivers/base/power
Commit message / Author / Date / Files / Lines (-removed/+added)
* Merge branches 'pm-sleep', 'pm-domains' and 'pm-avs'
  Rafael J. Wysocki, 2015-09-01 (2 files, -311/+77)

  * pm-sleep:
    PM / suspend: make sync() on suspend-to-RAM build-time optional
    PM / sleep: Allow devices without runtime PM to do direct-complete
    PM / autosleep: Use workqueue for user space wakeup sources garbage collector
  * pm-domains:
    PM / Domains: Fix typo in description of genpd_dev_pm_detach()
    PM / Domains: Remove unusable governor dummies
    PM / Domains: Make pm_genpd_init() available to modules
    PM / domains: Align column headers and data in pm_genpd_summary output
    PM / Domains: Return -EPROBE_DEFER if we fail to init or turn-on domain
    PM / Domains: Correct unit address in power-controller example
    PM / Domains: Remove intermediate states from the power off sequence
  * pm-avs:
    PM / AVS: rockchip-io: add io selectors and supplies for rk3368
    PM / AVS: rockchip-io: depend on CONFIG_POWER_AVS
* PM / Domains: Fix typo in description of genpd_dev_pm_detach()
  Jon Hunter, 2015-08-29 (1 file, -1/+1)

  The function genpd_dev_pm_detach() detaches a device from a PM domain;
  however, in the description, the "dev" argument for the function is
  described as the device to "attach" instead of "detach". Correct this.

  Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
  Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Acked-by: Kevin Hilman <khilman@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / Domains: Make pm_genpd_init() available to modules
  Rajendra Nayak, 2015-08-29 (1 file, -0/+1)

  Export the symbol pm_genpd_init so it can be used in loadable kernel modules.

  Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
  Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Acked-by: Kevin Hilman <khilman@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
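To illustrate what the export enables, here is a minimal sketch of a loadable module registering its own generic PM domain. The domain name is made up, and the three-argument pm_genpd_init() signature (domain, governor, start-powered-off flag) is assumed from the genpd API of that era, not quoted from the patch.

    #include <linux/module.h>
    #include <linux/pm_domain.h>

    static struct generic_pm_domain my_pd = {
            .name = "my_pd",                /* hypothetical domain name */
    };

    static int __init my_pd_module_init(void)
    {
            /* Possible from a module only because pm_genpd_init() is now exported. */
            pm_genpd_init(&my_pd, NULL, true);      /* no governor, domain starts powered off */
            return 0;
    }
    module_init(my_pd_module_init);

    MODULE_LICENSE("GPL");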
* PM / domains: Align column headers and data in pm_genpd_summary output
  Geert Uytterhoeven, 2015-08-29 (1 file, -3/+3)

  "domain":  header is indented by 4, data by 0 spaces  => 0 spaces
  "/device": header is indented by 11, data by 4 spaces => 4 spaces
  "slaves":  header is indented by 47, data by 49 spaces => 48 spaces

  The before/after samples in the commit differ only in column alignment
  (which this listing cannot reproduce); the data shown is:

    domain    status    slaves
        /device                            runtime status
    ----------------------------------------------------------------------
    a3sp      on        a2us
        /devices/platform/e60b0000.i2c     suspended

  Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Acked-by: Kevin Hilman <khilman@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / Domains: Return -EPROBE_DEFER if we fail to init or turn-on domain
  Jon Hunter, 2015-07-31 (1 file, -5/+9)

  When a device is probed, the function dev_pm_domain_attach() is called to
  see if there is a power-domain that is associated with the device and needs
  to be turned on. If dev_pm_domain_attach() does not return -EPROBE_DEFER,
  then the device will be probed.

  For devices using genpd, dev_pm_domain_attach() will call
  genpd_dev_pm_attach(). If genpd_dev_pm_attach() does not find a power domain
  associated with the device, then it returns an error code not equal to
  -EPROBE_DEFER to allow the device to be probed. However, if
  genpd_dev_pm_attach() does find a power-domain that is associated with the
  device, then it does not return -EPROBE_DEFER on failure and hence the
  device will still be probed. Furthermore, genpd_dev_pm_attach() does not
  check the error code returned by pm_genpd_poweron() to see if the
  power-domain was turned on successfully.

  Fix this by checking the return code from pm_genpd_poweron() and returning
  -EPROBE_DEFER from genpd_dev_pm_attach() on failure, if there is a
  power-domain associated with the device.

  Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
  Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
  Acked-by: Kevin Hilman <khilman@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
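The shape of the fix, sketched with hypothetical helpers rather than the real genpd internals: a power-on failure for a device that does have a domain must surface as -EPROBE_DEFER so the driver core retries the probe later.

    #include <linux/device.h>
    #include <linux/errno.h>

    /* Hypothetical helpers standing in for the real genpd lookup and power-on. */
    int example_find_domain(struct device *dev);
    int example_power_on(struct device *dev);

    static int example_attach(struct device *dev)
    {
            int ret;

            ret = example_find_domain(dev);
            if (ret)
                    return ret;             /* no domain: not -EPROBE_DEFER, probe proceeds */

            ret = example_power_on(dev);
            if (ret)
                    return -EPROBE_DEFER;   /* domain exists but isn't ready: retry the probe later */

            return 0;
    }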
* PM / Domains: Remove intermediate states from the power off sequence
  Ulf Hansson, 2015-07-31 (1 file, -301/+62)

  Genpd's ->runtime_suspend() (assigned to pm_genpd_runtime_suspend()) doesn't
  immediately walk the hierarchy of ->runtime_suspend() callbacks. Instead,
  pm_genpd_runtime_suspend() calls pm_genpd_poweroff(), which postpones that
  until *all* the devices in the genpd are runtime suspended.

  When pm_genpd_poweroff() discovers that the last device in the genpd is
  about to be runtime suspended, it calls __pm_genpd_save_device() for *all*
  the devices in the genpd sequentially. Furthermore, __pm_genpd_save_device()
  invokes the ->start() callback, walks the hierarchy of the
  ->runtime_suspend() callbacks and invokes the ->stop() callback. This causes
  a "thundering herd" problem.

  Let's address this issue by having pm_genpd_runtime_suspend() immediately
  walk the hierarchy of the ->runtime_suspend() callbacks, instead of
  postponing that to the power off sequence via pm_genpd_poweroff(). If the
  selected ->runtime_suspend() callback doesn't return an error code, call
  pm_genpd_poweroff() to see if it's feasible to also power off the PM domain.

  Adopting this change enables us to simplify parts of the code in genpd, for
  example the locking mechanism. Additionally, it gives some positive side
  effects, as described below.

  i) One device's ->runtime_resume() latency is no longer affected by other
  devices' latencies in a genpd. The complexity genpd has to support the
  option to abort the power off sequence suffers from latency issues. More
  precisely, a device that is requested to be runtime resumed may end up
  waiting for __pm_genpd_save_device() to complete its operations for
  *another* device. That's because pm_genpd_poweroff() can't confirm an abort
  request while it waits for __pm_genpd_save_device() to return. As this patch
  removes the intermediate states in pm_genpd_poweroff() while powering off
  the PM domain, we no longer need the ability to abort that sequence.

  ii) Make pm_runtime[_status]_suspended() reliable when used with genpd.
  Until the last device in a genpd becomes idle, pm_genpd_runtime_suspend()
  will return 0 without actually walking the hierarchy of the
  ->runtime_suspend() callbacks. However, by returning 0 the runtime PM core
  considers the device as runtime_suspended, so
  pm_runtime[_status]_suspended() will return true, even though the device
  isn't (yet) runtime suspended. After this patch, since
  pm_genpd_runtime_suspend() immediately walks the hierarchy of the
  ->runtime_suspend() callbacks, pm_runtime[_status]_suspended() will
  accurately reflect the status of the device.

  iii) Enable fine-grained PM through runtime PM callbacks in
  drivers/subsystems. There are currently cases where drivers/subsystems
  implement runtime PM callbacks to deploy fine-grained PM (e.g. gate clocks,
  move pinctrl to power-save state, etc.). While using genpd,
  pm_genpd_runtime_suspend() postpones invoking these callbacks until *all*
  the devices in the genpd are runtime suspended. In essence, one runtime
  resumed device prevents fine-grained PM for other devices within the same
  genpd. After this patch, since pm_genpd_runtime_suspend() immediately walks
  the hierarchy of the ->runtime_suspend() callbacks, fine-grained PM is
  enabled throughout all the levels of runtime PM callbacks.

  iv) Enable fine-grained PM for IRQ safe devices. Per the definition of an
  IRQ safe device, its runtime PM callbacks must be able to execute in atomic
  context. In the path where genpd walks the hierarchy of the
  ->runtime_suspend() callbacks for the device, it uses a mutex. Therefore,
  genpd prevents that path from being executed for IRQ safe devices. As this
  patch changes pm_genpd_runtime_suspend() to immediately walk the hierarchy
  of the ->runtime_suspend() callbacks without needing a mutex, fine-grained
  PM is enabled throughout all the levels of runtime PM callbacks for IRQ safe
  devices.

  Unfortunately this patch also comes with a drawback, as described below.
  Drivers'/subsystems' runtime PM callbacks may be invoked even when the genpd
  hasn't actually powered off the PM domain, potentially introducing
  unnecessary latency. However, in most cases, saving/restoring register
  context for a device is typically a fast operation or can be optimized in
  device-specific ways (e.g. shadow copies of register contents in memory,
  device-specific checks to see if context has been lost before restoring
  context, etc.). Still, in some cases the driver/subsystem may suffer from
  latency if runtime PM is used in a very fine-grained manner (e.g. for each
  IO request or xfer). To prevent that extra overhead, the driver/subsystem
  may deploy the runtime PM autosuspend feature.

  Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
  Reviewed-by: Kevin Hilman <khilman@linaro.org>
  Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Tested-by: Lina Iyer <lina.iyer@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
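The autosuspend feature mentioned above is driver-facing; a minimal sketch (illustrative delay and function names) of how a driver batches fine-grained runtime PM to avoid paying the save/restore cost on every transfer:

    #include <linux/pm_runtime.h>

    static int example_driver_probe(struct device *dev)
    {
            pm_runtime_set_autosuspend_delay(dev, 50);      /* illustrative 50 ms idle window */
            pm_runtime_use_autosuspend(dev);
            pm_runtime_enable(dev);
            return 0;
    }

    static int example_driver_xfer(struct device *dev)
    {
            int ret;

            ret = pm_runtime_get_sync(dev);                 /* resume the device (and its domain) */
            if (ret < 0) {
                    pm_runtime_put_noidle(dev);
                    return ret;
            }

            /* ... perform the I/O ... */

            pm_runtime_mark_last_busy(dev);                 /* restart the autosuspend timer */
            pm_runtime_put_autosuspend(dev);                /* suspend only after the idle window */
            return 0;
    }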
* PM / sleep: Allow devices without runtime PM to do direct-complete
  Alan Stern, 2015-07-21 (1 file, -1/+1)

  Don't unset the direct_complete flag on devices that have runtime PM
  disabled, if they are runtime suspended. This is needed because otherwise
  ancestor devices wouldn't be able to do direct_complete without adding
  runtime PM support to all their descendants.

  This also removes pm_runtime_suspended_if_enabled() because it's now unused.

  Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
  Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branches 'pm-cpuidle', 'pm-devfreq' and 'pm-clk'
  Rafael J. Wysocki, 2015-09-01 (1 file, -3/+1)

  * pm-cpuidle:
    cpuidle/coupled: Remove redundant 'dev' argument of cpuidle_state_is_coupled()
    cpuidle/coupled: Remove cpuidle_device::safe_state_index
    intel_idle: Skylake Client Support
    intel_idle: allow idle states to be freeze-mode specific
  * pm-devfreq:
    PM / devfreq: exynos-ppmu: Update documentation to support PPMUv2
    PM / devfreq: exynos-ppmu: Add the support of PPMUv2 for Exynos5433
    PM / devfreq: event: Remove incorrect property in exynos-ppmu DT binding
  * pm-clk:
    PM / clk: don't return int on __pm_clk_enable()
* PM / clk: don't return int on __pm_clk_enable()
  Colin Ian King, 2015-07-17 (1 file, -3/+1)

  Static analysis by cppcheck found an issue that was recently introduced by
  commit 471f7707b6f0b1 ("PM / clock_ops: make __pm_clk_enable more generic"),
  where the return status in ret was not being initialised and garbage was
  returned when ce->status >= PCE_STATUS_ERROR.

  The fact that ret is not checked by the caller, and that ret is only used
  internally in __pm_clk_enable() to check whether clk_enable() was OK, means
  we can stop returning it and instead turn __pm_clk_enable() into a function
  with a void return.

  Fixes: 471f7707b6f0b1 ("PM / clock_ops: make __pm_clk_enable more generic")
  Signed-off-by: Colin Ian King <colin.king@canonical.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branch 'pm-opp'
  Rafael J. Wysocki, 2015-09-01 (1 file, -181/+822)

  * pm-opp:
    PM / OPP: Drop unlikely before IS_ERR(_OR_NULL)
    PM / OPP: Fix static checker warning (broken 64bit big endian systems)
    PM / OPP: Free resources and properly return error on failure
    cpufreq-dt: make scaling_boost_freqs sysfs attr available when boost is enabled
    cpufreq: dt: Add support for turbo/boost mode
    cpufreq: dt: Add support for operating-points-v2 bindings
    cpufreq: Allow drivers to enable boost support after registering driver
    cpufreq: Update boost flag while initializing freq table from OPPs
    PM / OPP: add dev_pm_opp_is_turbo() helper
    PM / OPP: Add helpers for initializing CPU OPPs
    PM / OPP: Add support for opp-suspend
    PM / OPP: Add OPP sharing information to OPP library
    PM / OPP: Add clock-latency-ns support
    PM / OPP: Add support to parse "operating-points-v2" bindings
    PM / OPP: Break _opp_add_dynamic() into smaller functions
    PM / OPP: Allocate dev_opp from _add_device_opp()
    PM / OPP: Create _remove_device_opp() for freeing dev_opp
    PM / OPP: Relocate few routines
    PM / OPP: Create a directory for opp bindings
    PM / OPP: Update bindings to make opp-hz a 64 bit value
* PM / OPP: Drop unlikely before IS_ERR(_OR_NULL)
  Viresh Kumar, 2015-08-28 (1 file, -3/+3)

  IS_ERR(_OR_NULL) already contains an 'unlikely' compiler flag, and there is
  no need to add it again from its callers. Drop it.

  Acked-by: Pavel Machek <pavel@ucw.cz>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Fix static checker warning (broken 64bit big endian systems)
  Viresh Kumar, 2015-08-28 (1 file, -3/+6)

  Dan Carpenter reported (generated with a static checker):

    drivers/base/power/opp.c:949 _opp_add_static_v2()
    warn: passing casted pointer '&new_opp->clock_latency_ns' to
          'of_property_read_u32()' 64 vs 32.

  This code will break on 64-bit, big endian machines. Fix this by reading the
  value into a u32 variable first and then assigning it to the unsigned long
  variable.

  Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
  Suggested-by: Stephen Boyd <sboyd@codeaurora.org>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
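A sketch of the pattern the fix describes, with made-up structure and function names: of_property_read_u32() writes exactly 32 bits, so passing a cast pointer to an unsigned long breaks on 64-bit big-endian machines; read into a u32 first, then assign.

    #include <linux/errno.h>
    #include <linux/of.h>

    struct example_opp {
            unsigned long clock_latency_ns;
    };

    static int example_read_latency(struct device_node *np, struct example_opp *opp)
    {
            u32 val;

            /* Broken on 64-bit big-endian:
             *   of_property_read_u32(np, "clock-latency-ns",
             *                        (u32 *)&opp->clock_latency_ns);
             */
            if (of_property_read_u32(np, "clock-latency-ns", &val))
                    return -EINVAL;

            opp->clock_latency_ns = val;    /* widen safely */
            return 0;
    }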
* PM / OPP: Free resources and properly return error on failure
  Viresh Kumar, 2015-08-28 (1 file, -14/+16)

  _of_init_opp_table_v2() isn't freeing up resources on some errors, and the
  error values returned are also not always correct. This fixes the following
  problems:

  - Return -ENOENT if no entries are found in the table.
  - Use IS_ERR() to properly check the return value of _find_device_opp().
  - Return the error value with PTR_ERR() in the above case.
  - Free the table if _find_device_opp() fails.

  Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: add dev_pm_opp_is_turbo() helper
  Bartlomiej Zolnierkiewicz, 2015-08-07 (1 file, -0/+34)

  Add a dev_pm_opp_is_turbo() helper to verify whether an OPP is to be used
  only for turbo mode or not.

  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Add helpers for initializing CPU OPPs
  Viresh Kumar, 2015-08-07 (1 file, -0/+175)

  With "operating-points-v2" it's possible to tell which devices share OPPs.
  We already have infrastructure to decode that information. This patch adds
  the following APIs:

  - of_get_cpus_sharing_opps: Returns a cpumask of CPUs sharing OPPs (only
    valid with v2 bindings).
  - of_cpumask_init_opp_table: Initializes OPPs for all CPUs present in the
    cpumask.
  - of_cpumask_free_opp_table: Frees OPPs for all CPUs present in the cpumask.
  - set_cpus_sharing_opps: Sets which CPUs share OPPs (only valid with the old
    OPP bindings, as this information isn't present in DT).

  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Add support for opp-suspend
  Viresh Kumar, 2015-08-07 (1 file, -0/+11)

  With "operating-points-v2" bindings, it's possible to specify the OPP to
  which the device must be switched before suspending. This patch adds support
  for getting that information.

  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Add OPP sharing information to OPP library
  Viresh Kumar, 2015-08-07 (1 file, -24/+150)

  An OPP can be shared by multiple devices; for example, it's very common for
  CPUs to share OPPs when they share clock/voltage rails. This patch adds
  support for shared OPPs to the OPP library.

  Instead of a single device, dev_opp will now contain a list of devices that
  use it. It also senses whether the device we are trying to initialize OPPs
  for shares OPPs with a device added earlier, and in that case we update the
  list of devices managed by the dev_opp instead of duplicating the OPPs
  again.

  The same infrastructure will be used for the old OPP bindings in later
  patches.

  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Add clock-latency-ns support
  Viresh Kumar, 2015-08-07 (1 file, -2/+39)

  With "operating-points-v2" bindings, clock-latency is defined per OPP. Users
  of this value expect a single value which defines the latency to switch to
  any clock rate. Find the maximum clock-latency-ns from the OPP table to
  service requests from such users.

  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
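Consumers see the aggregated value through a single helper; assuming that helper is dev_pm_opp_get_max_clock_latency(), as the OPP API naming of the time suggests, a cpufreq-style user would do roughly:

    #include <linux/device.h>
    #include <linux/pm_opp.h>

    static void example_report_latency(struct device *cpu_dev)
    {
            /* Assumed helper: returns the largest clock-latency-ns in the OPP table. */
            unsigned long latency_ns = dev_pm_opp_get_max_clock_latency(cpu_dev);

            dev_info(cpu_dev, "transition latency: %lu ns\n", latency_ns);
    }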
* PM / OPP: Add support to parse "operating-points-v2" bindings
  Viresh Kumar, 2015-08-07 (1 file, -24/+233)

  This adds support in the OPP library to parse and create a list of OPPs from
  operating-points-v2 bindings. It takes care of most of the properties of the
  new bindings (except shared-opp, which will be handled separately).

  For backward compatibility, we keep supporting the earlier bindings. We try
  to search for the new bindings first; in case they aren't present, we look
  for the old deprecated ones.

  A few things are marked as TODO:
  - Support for multiple OPP tables
  - Support for multiple regulators

  They should be fixed separately.

  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Break _opp_add_dynamic() into smaller functions
  Viresh Kumar, 2015-08-07 (1 file, -49/+76)

  Later commits will add support for new OPP bindings and will require this.
  So let's do it in a separate patch to make it easily reviewable.

  Another change worth noticing is INIT_LIST_HEAD(&opp->node). We weren't
  doing it earlier, as we never tried to delete a list node before it was
  added to the list. But this won't be the case anymore: we might try to
  delete a node (just to reuse the same code paths) without it ever being
  added to the list.

  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Allocate dev_opp from _add_device_opp()
  Viresh Kumar, 2015-08-07 (1 file, -23/+27)

  There is no need to complicate _opp_add_dynamic() with the allocation of
  dev_opp as well. Allocate it from _add_device_opp() instead.

  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Create _remove_device_opp() for freeing dev_opp
  Viresh Kumar, 2015-08-07 (1 file, -5/+17)

  This will be used from multiple places later. Let's create a separate
  routine for it.

  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / OPP: Relocate few routines
  Viresh Kumar, 2015-08-07 (1 file, -138/+139)

  In order to prepare for the later commits, this relocates a few routines
  towards the top of the file, as they will be used earlier in the code.

  Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
  Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / QoS: Make it possible to expose device latency tolerance to userspace
  Mika Westerberg, 2015-07-28 (3 files, -0/+50)

  Typically, when a device is created, the bus core it belongs to (for example
  PCI) does not know whether the device supports things like latency
  tolerance. This is left to the driver that binds to the device in question.
  However, at that time the device has already been created and there is no
  way to set its dev->power.set_latency_tolerance anymore.

  So follow what has been done for other PM QoS attributes and allow drivers
  to expose and hide latency tolerance from userspace, if the device supports
  it.

  Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Signed-off-by: Lee Jones <lee.jones@linaro.org>
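A sketch of the driver-side usage this enables; the expose/hide pair follows the naming of the other PM QoS expose helpers, and the exact calling convention is assumed rather than quoted from the patch.

    #include <linux/device.h>
    #include <linux/pm_qos.h>

    /* Sketch: a driver that knows its device honours latency tolerance hints.
     * It presumes dev->power.set_latency_tolerance has been populated by the
     * driver/subsystem; exposing creates the latency_tolerance_us sysfs file. */
    static int example_probe(struct device *dev)
    {
            return dev_pm_qos_expose_latency_tolerance(dev);
    }

    static void example_remove(struct device *dev)
    {
            dev_pm_qos_hide_latency_tolerance(dev);
    }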
* Merge branches 'acpi-pnp', 'acpi-soc', 'pm-domains' and 'pm-sleep'
  Rafael J. Wysocki, 2015-07-07 (1 file, -2/+11)

  * acpi-pnp:
    ACPI / PNP: Reserve ACPI resources at the fs_initcall_sync stage
  * acpi-soc:
    ACPI / LPSS: Fix up acpi_lpss_create_device()
  * pm-domains:
    PM / Domains: Avoid infinite loops in attach/detach code
  * pm-sleep:
    PM / hibernate: clarify resume documentation
* PM / Domains: Avoid infinite loops in attach/detach code
  Geert Uytterhoeven, 2015-07-07 (1 file, -2/+11)

  If pm_genpd_{add,remove}_device() keeps on failing with -EAGAIN, we end up
  with an infinite loop in genpd_dev_pm_{at,de}tach(). This may happen due to
  a genpd.prepared_count imbalance. That is a bug elsewhere, but it will
  result in a system lock up, possibly during reboot of an otherwise
  functioning system.

  To avoid this, put a limit on the maximum number of loop iterations, using
  an exponential back-off mechanism. If the limit is reached, the operation
  will just fail. An error message is already printed.

  Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
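A generic sketch of bounded retry with exponential back-off, as described above; the iteration cap, delay values and helper name are illustrative, not the ones used in the genpd code.

    #include <linux/delay.h>
    #include <linux/device.h>
    #include <linux/errno.h>

    #define EXAMPLE_MAX_TRIES 5

    int example_do_attach(struct device *dev);      /* hypothetical -EAGAIN-prone operation */

    static int example_attach_with_backoff(struct device *dev)
    {
            unsigned int delay_ms = 1;
            int i, ret;

            for (i = 0; i < EXAMPLE_MAX_TRIES; i++) {
                    ret = example_do_attach(dev);
                    if (ret != -EAGAIN)
                            return ret;             /* success or a hard error */

                    msleep(delay_ms);
                    delay_ms *= 2;                  /* exponential back-off */
            }

            return -EAGAIN;                         /* give up instead of looping forever */
    }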
* Merge branch 'pm-wakeirq'
  Rafael J. Wysocki, 2015-07-07 (2 files, -28/+15)

  * pm-wakeirq:
    PM / wakeirq: Avoid setting power.wakeirq too hastily
* PM / wakeirq: Avoid setting power.wakeirq too hastily
  Rafael J. Wysocki, 2015-07-07 (2 files, -28/+15)

  If dev_pm_attach_wake_irq() fails, the device's power.wakeirq field should
  not be set to point to the struct wake_irq passed to that function, as that
  object will be freed going forward.

  For this reason, make dev_pm_attach_wake_irq() first call
  device_wakeup_attach_irq() and only set the device's power.wakeirq field if
  that's successful. That requires device_wakeup_attach_irq() to be called
  under the device's power.lock lock, but since dev_pm_attach_wake_irq() is
  the only caller of it, the requisite changes are easy to make.

  Fixes: 4990d4fe327b (PM / Wakeirq: Add automated device wake IRQ handling)
  Reported-by: Felipe Balbi <balbi@ti.com>
  Tested-by: Tony Lindgren <tony@atomide.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branches 'pm-clk', 'pm-domains' and 'powercap'
  Rafael J. Wysocki, 2015-06-19 (2 files, -18/+67)

  * pm-clk:
    PM / clk: Print acquired clock name in addition to con_id
    PM / clk: Fix clock error check in __pm_clk_add()
    drivers: sh: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
    arm: davinci: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
    arm: omap1: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
    arm: keystone: remove boilerplate code and use USE_PM_CLK_RUNTIME_OPS
    PM / clock_ops: Provide default runtime ops to users
  * pm-domains:
    PM / Domains: Skip timings during syscore suspend/resume
  * powercap:
    powercap / RAPL: Support Knights Landing
    powercap / RAPL: Floor frequency setting in Atom SoC
* PM / Domains: Skip timings during syscore suspend/resume
  Geert Uytterhoeven, 2015-06-15 (1 file, -16/+26)

  The PM Domain code uses ktime_get() to perform various latency measurements.
  However, if ktime_get() is called while timekeeping is suspended, the
  following warning is printed:

    WARNING: CPU: 0 PID: 1340 at kernel/time/timekeeping.c:576 ktime_get+0x3

  This happens when resuming the PM Domain that contains the clock events
  source, which calls pm_genpd_syscore_poweron(). The chain of operations is:

    timekeeping_resume() {
        clockevents_resume()
          -> sh_cmt_clock_event_resume()
          -> pm_genpd_syscore_poweron()
          -> pm_genpd_sync_poweron()
          -> genpd_syscore_switch()
          -> genpd_power_on()
          -> ktime_get(), but timekeeping_suspended == 1
        ...
        timekeeping_suspended = 0;
    }

  Fix this by adding a "timed" parameter to genpd_power_{on,off}() and
  pm_genpd_sync_power{off,on}(), to indicate whether latency measurements are
  allowed. This parameter is passed as false in genpd_syscore_switch() (i.e.
  during syscore suspend/resume), and true in all other cases.

  Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / clk: Print acquired clock name in addition to con_id
  Geert Uytterhoeven, 2015-06-15 (1 file, -1/+2)

  Currently the con_id of the acquired clock is printed for debugging
  purposes. But in several cases, the con_id is NULL, which doesn't provide
  much debugging information when printed. These cases are:
  - when explicitly passing a NULL con_id (which means the first clock tied to
    the device, if available),
  - when not using pm_clk_add(), but pm_clk_add_clk() (which takes a
    "struct clk *" directly).

  Hence print the actual clock name in addition to (and not instead of; thanks
  Grygorii Strashko!) the con_id. Note that the clock name is not available
  with legacy clock frameworks, and the hex pointer address will be printed
  instead.

  Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Reviewed-by: Grygorii Strashko <grygorii.strashko@linaro.org>
  Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / clk: Fix clock error check in __pm_clk_add()
  Geert Uytterhoeven, 2015-05-19 (1 file, -1/+1)

  In the final iteration of commit 245bd6f6af8a62a2 ("PM / clock_ops: Add
  pm_clk_add_clk()"), a refcount increment was added by Grygorii Strashko.
  However, the accompanying IS_ERR() check operates on the wrong clock
  pointer, which is always zero at this point, i.e. not an error. This may
  lead to a NULL pointer dereference later, when __clk_get() tries to
  dereference an error pointer.

  Check the passed clock pointer instead to fix this.

  Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Fixes: 245bd6f6af8a62a2 ("PM / clock_ops: Add pm_clk_add_clk()")
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
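The essence of the one-line fix, sketched around a generic add-clock helper rather than the literal __pm_clk_add(): validate the pointer the caller handed in, not a not-yet-assigned internal field.

    #include <linux/clk.h>
    #include <linux/device.h>
    #include <linux/err.h>

    static int example_clk_add(struct device *dev, struct clk *clk)
    {
            /* Validate the clock the caller passed in; the broken code tested an
             * internal field that had not been assigned yet, so the check was
             * always "not an error" and bad pointers slipped through. */
            if (IS_ERR(clk))
                    return -ENOENT;

            /* ... take a reference on the clock and stash the pointer ... */
            return 0;
    }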
* PM / clock_ops: Provide default runtime ops to users
  Rajendra Nayak, 2015-05-12 (1 file, -0/+38)

  Most users of PM clocks do the exact same things in the runtime
  suspend/resume callbacks. Provide them USE_PM_CLK_RUNTIME_OPS so as to
  avoid/remove boilerplate code.

  Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
  Reviewed-by: Kevin Hilman <khilman@linaro.org>
  Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
  Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
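A sketch of what the boilerplate removal looks like for a platform PM domain; the macro is assumed to expand to the pm_clk_suspend()/pm_clk_resume() runtime callbacks, as the platform conversions listed in the merge above suggest.

    #include <linux/platform_device.h>
    #include <linux/pm_clock.h>
    #include <linux/pm_domain.h>

    /* Before, each platform open-coded runtime callbacks that simply called
     * pm_clk_suspend()/pm_clk_resume(). With the helper macro, the PM domain
     * definition shrinks to: */
    static struct dev_pm_domain example_pm_domain = {
            .ops = {
                    USE_PM_CLK_RUNTIME_OPS
                    USE_PLATFORM_PM_SLEEP_OPS
            },
    };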
* Merge branch 'pm-wakeirq'
  Rafael J. Wysocki, 2015-06-19 (6 files, -1/+422)

  * pm-wakeirq:
    PM / wakeirq: Fix typo in prototype for dev_pm_set_dedicated_wake_irq
    PM / Wakeirq: Add automated device wake IRQ handling
* PM / Wakeirq: Add automated device wake IRQ handling
  Tony Lindgren, 2015-05-20 (6 files, -1/+422)

  Turns out we can automate the handling for device_may_wakeup() quite a bit
  by using the kernel wakeup source list, as suggested by Rafael J. Wysocki
  <rjw@rjwysocki.net>.

  And as some hardware has a separate dedicated wake-up interrupt in addition
  to the IO interrupt, we can automate the handling by adding a generic
  threaded interrupt handler that just calls the device PM runtime to wake up
  the device.

  This allows dropping code from device drivers, as we currently are doing it
  in multiple ways, and often wrong.

  For most drivers, we should be able to drop the following boilerplate code
  from the runtime_suspend and runtime_resume functions:

    ...
    device_init_wakeup(dev, true);
    ...
    if (device_may_wakeup(dev))
            enable_irq_wake(irq);
    ...
    if (device_may_wakeup(dev))
            disable_irq_wake(irq);
    ...
    device_init_wakeup(dev, false);
    ...

  We can replace it with just the following init and exit time code:

    ...
    device_init_wakeup(dev, true);
    dev_pm_set_wake_irq(dev, irq);
    ...
    dev_pm_clear_wake_irq(dev);
    device_init_wakeup(dev, false);
    ...

  And for hardware with dedicated wake-up interrupts:

    ...
    device_init_wakeup(dev, true);
    dev_pm_set_dedicated_wake_irq(dev, irq);
    ...
    dev_pm_clear_wake_irq(dev);
    device_init_wakeup(dev, false);
    ...

  Signed-off-by: Tony Lindgren <tony@atomide.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
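A fuller, hypothetical probe/remove pairing of the calls quoted above, for hardware with a dedicated wake-up interrupt (the IRQ index and function names other than the wakeirq/wakeup APIs are illustrative):

    #include <linux/platform_device.h>
    #include <linux/pm_wakeirq.h>
    #include <linux/pm_wakeup.h>

    static int example_probe(struct platform_device *pdev)
    {
            int wakeirq = platform_get_irq(pdev, 1);        /* assumed: second IRQ is the wake IRQ */
            int ret;

            device_init_wakeup(&pdev->dev, true);
            if (wakeirq > 0) {
                    ret = dev_pm_set_dedicated_wake_irq(&pdev->dev, wakeirq);
                    if (ret)
                            return ret;
            }
            return 0;
    }

    static int example_remove(struct platform_device *pdev)
    {
            dev_pm_clear_wake_irq(&pdev->dev);
            device_init_wakeup(&pdev->dev, false);
            return 0;
    }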
* Merge branches 'pm-sleep' and 'pm-runtime'
  Rafael J. Wysocki, 2015-06-19 (3 files, -6/+60)

  * pm-sleep:
    PM / sleep: trace_device_pm_callback coverage in dpm_prepare/complete
    PM / wakeup: add a dummy wakeup_source to record statistics
    PM / sleep: Make suspend-to-idle-specific code depend on CONFIG_SUSPEND
    PM / sleep: Return -EBUSY from suspend_enter() on wakeup detection
    PM / tick: Add tracepoints for suspend-to-idle diagnostics
    PM / sleep: Fix symbol name in a comment in kernel/power/main.c
    leds / PM: fix hibernation on arm when gpio-led used with CPU led trigger
    ARM: omap-device: use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS
    bus: omap_l3_noc: add missed callbacks for suspend-to-disk
    PM / sleep: Add macro to define common noirq system PM callbacks
    PM / sleep: Refine diagnostic messages in enter_state()
    PM / wakeup: validate wakeup source before activating it.
  * pm-runtime:
    PM / Runtime: Update last_busy in rpm_resume
    PM / runtime: add note about re-calling in during device probe()
* PM / Runtime: Update last_busy in rpm_resume
  Tony Lindgren, 2015-05-20 (1 file, -0/+1)

  If we don't update last_busy in rpm_resume, devices can go back to sleep
  immediately after resume. This happens at least in cases where the device
  has been powered off and does not have any interrupt pending until there's
  something in the FIFO.

  Signed-off-by: Tony Lindgren <tony@atomide.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / sleep: trace_device_pm_callback coverage in dpm_prepare/complete
  Todd E Brandt, 2015-06-10 (1 file, -6/+5)

  Move the trace_device_pm_callback locations for dpm_prepare and dpm_complete
  to encompass the attempt to capture the device mutex prior to the callback.
  This is needed by analyze_suspend to identify gaps in the trace output
  caused by the delay in locking the mutex for a device.

  Signed-off-by: Todd Brandt <todd.e.brandt@linux.intel.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / wakeup: add a dummy wakeup_source to record statistics
  Jin Qian, 2015-05-19 (1 file, -0/+36)

  After a wakeup_source is destroyed, we lose all information about it, such
  as how long the wakeup_source had been active. Add a dummy wakeup_source to
  record such info.

  Signed-off-by: Jin Qian <jinqian@android.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / wakeup: validate wakeup source before activating it.
  Jin Qian, 2015-05-08 (1 file, -0/+18)

  A rogue wakeup source not registered in the wakeup_sources list is not
  visible from wakeup_sources_stats_show. Check that the wakeup source is
  registered properly by looking at its timer struct.

  Signed-off-by: Jin Qian <jinqian@android.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* power: wakeup: remove use of seq_printf return value
  Joe Perches, 2015-04-16 (1 file, -9/+7)

  The seq_printf return value, because it's frequently misused, will
  eventually be converted to void. See: commit 1f33c41c03da ("seq_file: Rename
  seq_overflow() to seq_has_overflowed() and make public").

  Signed-off-by: Joe Perches <joe@perches.com>
  Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
  Cc: Pavel Machek <pavel@ucw.cz>
  Cc: Len Brown <len.brown@intel.com>
  Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
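The replacement pattern in general form: accumulate output with seq_printf() and ignore its return value; if buffer state ever matters, the seq_file itself can be queried with seq_has_overflowed(). A minimal sketch:

    #include <linux/seq_file.h>

    static int example_stats_show(struct seq_file *m, void *unused)
    {
            /* Output is accumulated in the seq_file; seq_printf()'s return
             * value is deliberately not checked. */
            seq_printf(m, "%-16s %-8s\n", "name", "active");
            seq_printf(m, "%-16s %-8s\n", "example_ws", "yes");

            /* If the buffer state matters, ask the seq_file itself via
             * seq_has_overflowed(m); the core retries ->show() with a larger
             * buffer on overflow anyway, so returning 0 is the normal pattern. */
            return 0;
    }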
* Merge branches 'pm-sleep' and 'pm-domains'
  Rafael J. Wysocki, 2015-04-13 (3 files, -28/+68)

  * pm-sleep:
    PM / watchdog: iTCO: stop watchdog during system suspend
    PM / sleep: add pm-trace support for suspending phase
    PM / sleep: add configurable delay for pm_test
  * pm-domains:
    PM / domains: avoid potential oops in pm_genpd_remove_device()
    PM / domains: factor out code to get the generic PM domain from a struct device
    PM / domains: quieten down generic pm domains
    PM / Domains: Sync runtime PM status with genpd after probe
    driver core / PM: Add PM domain callbacks for device setup/cleanup
    MAINTAINERS: add entry for Generic PM domains (genpd)
* PM / domains: avoid potential oops in pm_genpd_remove_device()
  Russell King, 2015-03-23 (1 file, -3/+1)

  pm_genpd_remove_device() tries hard to validate the generic PM domain passed
  to it, but the validation is not complete.

  dev->pm_domain contains a struct dev_pm_domain, which is the "base class" of
  generic PM domains. Other users of dev_pm_domains include stuff like
  vga_switcheroo. Hence, a device could have a generic PM domain or a
  vga_switcheroo PM domain in dev->pm_domain. We need to be certain that the
  PM domain is actually valid before we try to remove it.

  We can do this easily, as we have a way to get the current validated generic
  PM domain for a struct device. This must match the generic PM domain being
  requested for removal. Convert the code to use this alternative validation
  method instead.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Kevin Hilman <khilman@linaro.org>
  Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / domains: factor out code to get the generic PM domain from a struct device
  Russell King, 2015-03-22 (1 file, -14/+32)

  The PM domain code contains two methods to get the generic PM domain for a
  struct device. One is dev_to_genpd(), which is only safe when we know for
  certain that the device has a generic PM domain attached. The other is coded
  into genpd_dev_pm_detach(), which ensures that the PM domain in the struct
  device is a generic PM domain (and so is safer).

  This commit factors out the safer version, documents it, and hides the
  unsafe dev_to_genpd().

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / domains: quieten down generic pm domains
  Russell King, 2015-03-22 (1 file, -4/+4)

  PM domains are rather noisy; scheduling behaviour can cause callbacks to
  take longer, which causes them to spit out a warning-level message each time
  a callback takes a little longer than the previous time. There really isn't
  a need for this, except when debugging.

  Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
  Acked-by: Kevin Hilman <khilman@linaro.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* PM / Domains: Sync runtime PM status with genpd after probe
  Russell King, 2015-03-22 (1 file, -0/+12)

  Buses which currently support attaching devices to their PM domains invoke
  the dev_pm_domain_attach() API from their ->probe() callbacks. During the
  attach procedure, genpd powers up the PM domain.

  In those scenarios where the bus/driver doesn't need to access its device
  during probe, it may leave the device in the runtime PM suspended state,
  since that's also the default state. In that way, no notifications through
  the runtime PM callbacks will reach the PM domain during probe.

  For genpd, the consequence of the above scenario is that the PM domain will
  remain powered. Therefore, implement the struct dev_pm_domain's ->sync()
  callback, which is invoked from the driver core after the bus/driver has
  probed the device. It allows genpd to power off the PM domain if it's
  unused.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  [ Ulf: Updated patch according to updates in driver core ]
  Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
  Acked-by: Kevin Hilman <khilman@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge back earlier suspend/hibernate material for v4.1.
  Rafael J. Wysocki, 2015-04-10 (2 files, -7/+19)
* PM / sleep: add pm-trace support for suspending phase
  Zhonghui Fu, 2015-03-18 (2 files, -7/+19)

  Occasionally, the system can't come back up after suspend/resume due to
  problems in the device suspending phase. This patch makes the PM_TRACE
  infrastructure cover the device suspending phase of the suspend/resume
  process, so the information in the RTC can tell developers which device
  suspend function made the system hang.

  Signed-off-by: Zhonghui Fu <zhonghui.fu@linux.intel.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branches 'pm-domains' and 'pm-cpufreq'
  Rafael J. Wysocki, 2015-03-06 (1 file, -12/+12)

  * pm-domains:
    PM / Domains: cleanup: rename gpd -> genpd in debugfs interface
  * pm-cpufreq:
    cpufreq: ppc: Add missing #include <asm/smp.h>
* PM / Domains: cleanup: rename gpd -> genpd in debugfs interface
  Kevin Hilman, 2015-03-04 (1 file, -12/+12)

  To keep consistency with the rest of the file, use 'genpd' as the name of
  the 'struct generic_pm_domain' pointer instead of 'gpd'. This is just a
  rename, no functional changes.

  Signed-off-by: Kevin Hilman <khilman@linaro.org>
  Acked-by: Pavel Machek <pavel@ucw.cz>
  Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>