author | Linus Torvalds | 2017-11-17 23:34:42 +0100
---|---|---
committer | Linus Torvalds | 2017-11-17 23:34:42 +0100
commit | f6705bf959efac87bca76d40050d342f1d212587 (patch) |
tree | e199b124c6067a92be7f4727538ffc721670fc28 /drivers/gpu/drm/amd/display/dc/dce112 |
parent | Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/l... (diff) |
parent | Merge branch 'drm-next-4.15-dc' of git://people.freedesktop.org/~agd5f/linux ... (diff) |
Merge tag 'drm-for-v4.15-amd-dc' of git://people.freedesktop.org/~airlied/linux
Pull amdgpu DC display code for Vega from Dave Airlie:
"This is the pull request for the AMD DC (display code) layer which is
a requirement to program the display engines on the new Vega and Raven
based GPUs. It also contains support for all amdgpu supported GPUs
(CIK, VI, Polaris), which has to be enabled. It is also a kms atomic
modesetting compatible driver (unlike the current in-tree display
code).
I've kept it separate from drm-next because it may have some things
that cause you to reject it.
Background story:
AMD has an internal team creating a shared cross-OS display codebase at
hardware bring-up time, using information from their hardware teams. This
process doesn't lead to the most Linux-friendly-looking code, but we have
worked together on cleaning a lot of it up, dealing with
sparse/smatch/checkpatch, and having their team internally adhere to
Linux coding standards.
This tree is a complete rebased history since they started opening it;
we decided not to squash it down, as the history may have some value.
Some of the commits therefore might not reach kernel standards, and we
are steadily training people at AMD to write better commit messages.
There is a large body of generated bandwidth calculation and
verification code that comes from their hardware team. On Vega and
earlier this uses float calculations; on Raven (DCN10) it is double
based. They do the required things to do FP in the kernel, and I could
understand this might raise some issues. Rewriting the bandwidth code
would be a major undertaking in reverification; it's non-trivial to work
out whether a display can handle the complete set of mode information
thrown at it.
Future story:
There is a TODO list with this, and it addresses most of the remaining
things that would be nice to refine/remove. The DCN10 code is still
under development internally; they push out a lot of patches quite
regularly and are supporting this code base with their display team. I
think we've reached the point where keeping it out of tree is going to
motivate distributions to start carrying the code, so I'd prefer we get
it in tree. I think this code is slightly better than STAGING quality
but not massively so; I'd really like to see that float/double magic
gone and fixed point used, but AMD doesn't seem to think the accuracy
and revalidation of the code are worth the effort"
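The "required things to do FP in the kernel" mentioned above come down to two constraints on x86: the kernel is normally built soft-float with SSE disabled, so the objects doing the math generally need hard-float/SSE compiler flags, and every floating-point section has to be bracketed so the kernel saves and restores FPU/SIMD state (and holds off preemption) around it. Below is a minimal sketch of that generic bracket pattern; the function name, parameters and limit are invented for illustration and are not the DC bandwidth code's actual entry points.

/*
 * Illustrative only: the generic x86 pattern for floating-point work in
 * kernel code.  example_bw_fits() and the bandwidth limit are hypothetical;
 * this is the kind of bracketing the message above alludes to, not DC's
 * actual code.
 */
#include <linux/types.h>
#include <asm/fpu/api.h>

static bool example_bw_fits(u32 pixel_clock_khz, u32 bytes_per_pixel)
{
	bool fits;

	kernel_fpu_begin();	/* save FPU/SIMD state, disable preemption */
	{
		/* FP math is only legal between the begin/end pair */
		double bw = (double)pixel_clock_khz * 1000.0 * bytes_per_pixel;

		fits = bw < 17.0e9;	/* made-up limit, bytes per second */
	}
	kernel_fpu_end();	/* restore FPU/SIMD state */

	return fits;
}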
* tag 'drm-for-v4.15-amd-dc' of git://people.freedesktop.org/~airlied/linux: (1110 commits)
drm/amd/display: fix MST link training fail division by 0
drm/amd/display: Fix formatting for null pointer dereference fix
drm/amd/display: Remove dangling planes on dc commit state
drm/amd/display: add flip_immediate to commit update for stream
drm/amd/display: Miss register MST encoder cbs
drm/amd/display: Fix warnings on S3 resume
drm/amd/display: use num_timing_generator instead of pipe_count
drm/amd/display: use configurable FBC option in dm
drm/amd/display: fix AZ clock not enabled before program AZ endpoint
amdgpu/dm: Don't use DRM_ERROR in amdgpu_dm_atomic_check
amd/display: Fix potential null dereference in dce_calcs.c
amdgpu/dm: Remove unused forward declaration
drm/amdgpu: Remove unused dc_stream from amdgpu_crtc
amdgpu/dc: Fix double unlock in amdgpu_dm_commit_planes
amdgpu/dc: Fix missing null checks in amdgpu_dm.c
amdgpu/dc: Fix potential null dereferences in amdgpu_dm.c
amdgpu/dc: fix more indentation warnings
amdgpu/dc: handle allocation failures in dc_commit_planes_to_stream.
amdgpu/dc: fix indentation warning from smatch.
amdgpu/dc: fix non-ansi function decls.
...
Diffstat (limited to 'drivers/gpu/drm/amd/display/dc/dce112')
7 files changed, 2485 insertions, 0 deletions
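One concrete piece of the fixed-point side already shows up in the diff below: dce112_resource.c converts PPLib clock levels with bw_frc_to_fixed() rather than carrying them as floats. The general technique is to scale values into integers with an implied binary point, so no FPU state handling is needed at all. The helpers below sketch that idea with a hypothetical Q32.32 layout; DC's real bw_fixed type and its helpers differ in their internals.

/*
 * Hypothetical Q32.32 fixed point: integer part in the high 32 bits,
 * fraction in the low 32 bits.  A sketch of the general technique only,
 * not DC's bw_fixed implementation.
 */
#include <linux/types.h>
#include <linux/math64.h>

#define FRAC_BITS	32

static inline s64 int_to_fix(s64 v)
{
	return v << FRAC_BITS;		/* caller keeps |v| below 2^31 */
}

/* value/divisor as a fixed-point fraction, e.g. kHz with divisor 1000 -> MHz */
static inline s64 frc_to_fix(s64 value, s64 divisor)
{
	return div64_s64(value << FRAC_BITS, divisor);
}

static inline s64 fix_mul(s64 a, s64 b)
{
	/* pre-shift both operands so the product stays inside 64 bits */
	return (a >> (FRAC_BITS / 2)) * (b >> (FRAC_BITS / 2));
}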
diff --git a/drivers/gpu/drm/amd/display/dc/dce112/Makefile b/drivers/gpu/drm/amd/display/dc/dce112/Makefile new file mode 100644 index 000000000000..265ac4310d85 --- /dev/null +++ b/drivers/gpu/drm/amd/display/dc/dce112/Makefile @@ -0,0 +1,10 @@ +# +# Makefile for the 'controller' sub-component of DAL. +# It provides the control and status of HW CRTC block. + +DCE112 = dce112_compressor.o dce112_hw_sequencer.o \ +dce112_resource.o + +AMD_DAL_DCE112 = $(addprefix $(AMDDALPATH)/dc/dce112/,$(DCE112)) + +AMD_DISPLAY_FILES += $(AMD_DAL_DCE112) diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_compressor.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_compressor.c new file mode 100644 index 000000000000..69649928768c --- /dev/null +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_compressor.c @@ -0,0 +1,854 @@ +/* + * Copyright 2012-15 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + * Authors: AMD + * + */ + +#include "dm_services.h" + +#include "dce/dce_11_2_d.h" +#include "dce/dce_11_2_sh_mask.h" +#include "gmc/gmc_8_1_sh_mask.h" +#include "gmc/gmc_8_1_d.h" + +#include "include/logger_interface.h" + +#include "dce112_compressor.h" + +#define DCP_REG(reg)\ + (reg + cp110->offsets.dcp_offset) +#define DMIF_REG(reg)\ + (reg + cp110->offsets.dmif_offset) + +static const struct dce112_compressor_reg_offsets reg_offsets[] = { +{ + .dcp_offset = (mmDCP0_GRPH_CONTROL - mmDCP0_GRPH_CONTROL), + .dmif_offset = + (mmDMIF_PG0_DPG_PIPE_DPM_CONTROL + - mmDMIF_PG0_DPG_PIPE_DPM_CONTROL), +}, +{ + .dcp_offset = (mmDCP1_GRPH_CONTROL - mmDCP0_GRPH_CONTROL), + .dmif_offset = + (mmDMIF_PG1_DPG_PIPE_DPM_CONTROL + - mmDMIF_PG0_DPG_PIPE_DPM_CONTROL), +}, +{ + .dcp_offset = (mmDCP2_GRPH_CONTROL - mmDCP0_GRPH_CONTROL), + .dmif_offset = + (mmDMIF_PG2_DPG_PIPE_DPM_CONTROL + - mmDMIF_PG0_DPG_PIPE_DPM_CONTROL), +} +}; + +static const uint32_t dce11_one_lpt_channel_max_resolution = 2560 * 1600; + +enum fbc_idle_force { + /* Bit 0 - Display registers updated */ + FBC_IDLE_FORCE_DISPLAY_REGISTER_UPDATE = 0x00000001, + + /* Bit 2 - FBC_GRPH_COMP_EN register updated */ + FBC_IDLE_FORCE_GRPH_COMP_EN = 0x00000002, + /* Bit 3 - FBC_SRC_SEL register updated */ + FBC_IDLE_FORCE_SRC_SEL_CHANGE = 0x00000004, + /* Bit 4 - FBC_MIN_COMPRESSION register updated */ + FBC_IDLE_FORCE_MIN_COMPRESSION_CHANGE = 0x00000008, + /* Bit 5 - FBC_ALPHA_COMP_EN register updated */ + FBC_IDLE_FORCE_ALPHA_COMP_EN = 0x00000010, + /* Bit 6 - FBC_ZERO_ALPHA_CHUNK_SKIP_EN register updated */ + FBC_IDLE_FORCE_ZERO_ALPHA_CHUNK_SKIP_EN = 0x00000020, + /* Bit 7 - FBC_FORCE_COPY_TO_COMP_BUF register updated */ + FBC_IDLE_FORCE_FORCE_COPY_TO_COMP_BUF = 0x00000040, + + /* Bit 24 - Memory write to region 0 defined by MC registers. */ + FBC_IDLE_FORCE_MEMORY_WRITE_TO_REGION0 = 0x01000000, + /* Bit 25 - Memory write to region 1 defined by MC registers */ + FBC_IDLE_FORCE_MEMORY_WRITE_TO_REGION1 = 0x02000000, + /* Bit 26 - Memory write to region 2 defined by MC registers */ + FBC_IDLE_FORCE_MEMORY_WRITE_TO_REGION2 = 0x04000000, + /* Bit 27 - Memory write to region 3 defined by MC registers. */ + FBC_IDLE_FORCE_MEMORY_WRITE_TO_REGION3 = 0x08000000, + + /* Bit 28 - Memory write from any client other than MCIF */ + FBC_IDLE_FORCE_MEMORY_WRITE_OTHER_THAN_MCIF = 0x10000000, + /* Bit 29 - CG statics screen signal is inactive */ + FBC_IDLE_FORCE_CG_STATIC_SCREEN_IS_INACTIVE = 0x20000000, +}; + +static uint32_t lpt_size_alignment(struct dce112_compressor *cp110) +{ + /*LPT_ALIGNMENT (in bytes) = ROW_SIZE * #BANKS * # DRAM CHANNELS. 
*/ + return cp110->base.raw_size * cp110->base.banks_num * + cp110->base.dram_channels_num; +} + +static uint32_t lpt_memory_control_config(struct dce112_compressor *cp110, + uint32_t lpt_control) +{ + /*LPT MC Config */ + if (cp110->base.options.bits.LPT_MC_CONFIG == 1) { + /* POSSIBLE VALUES for LPT NUM_PIPES (DRAM CHANNELS): + * 00 - 1 CHANNEL + * 01 - 2 CHANNELS + * 02 - 4 OR 6 CHANNELS + * (Only for discrete GPU, N/A for CZ) + * 03 - 8 OR 12 CHANNELS + * (Only for discrete GPU, N/A for CZ) */ + switch (cp110->base.dram_channels_num) { + case 2: + set_reg_field_value( + lpt_control, + 1, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_NUM_PIPES); + break; + case 1: + set_reg_field_value( + lpt_control, + 0, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_NUM_PIPES); + break; + default: + dm_logger_write( + cp110->base.ctx->logger, LOG_WARNING, + "%s: Invalid LPT NUM_PIPES!!!", + __func__); + break; + } + + /* The mapping for LPT NUM_BANKS is in + * GRPH_CONTROL.GRPH_NUM_BANKS register field + * Specifies the number of memory banks for tiling + * purposes. Only applies to 2D and 3D tiling modes. + * POSSIBLE VALUES: + * 00 - DCP_GRPH_NUM_BANKS_2BANK: ADDR_SURF_2_BANK + * 01 - DCP_GRPH_NUM_BANKS_4BANK: ADDR_SURF_4_BANK + * 02 - DCP_GRPH_NUM_BANKS_8BANK: ADDR_SURF_8_BANK + * 03 - DCP_GRPH_NUM_BANKS_16BANK: ADDR_SURF_16_BANK */ + switch (cp110->base.banks_num) { + case 16: + set_reg_field_value( + lpt_control, + 3, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_NUM_BANKS); + break; + case 8: + set_reg_field_value( + lpt_control, + 2, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_NUM_BANKS); + break; + case 4: + set_reg_field_value( + lpt_control, + 1, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_NUM_BANKS); + break; + case 2: + set_reg_field_value( + lpt_control, + 0, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_NUM_BANKS); + break; + default: + dm_logger_write( + cp110->base.ctx->logger, LOG_WARNING, + "%s: Invalid LPT NUM_BANKS!!!", + __func__); + break; + } + + /* The mapping is in DMIF_ADDR_CALC. + * ADDR_CONFIG_PIPE_INTERLEAVE_SIZE register field for + * Carrizo specifies the memory interleave per pipe. + * It effectively specifies the location of pipe bits in + * the memory address. + * POSSIBLE VALUES: + * 00 - ADDR_CONFIG_PIPE_INTERLEAVE_256B: 256 byte + * interleave + * 01 - ADDR_CONFIG_PIPE_INTERLEAVE_512B: 512 byte + * interleave + */ + switch (cp110->base.channel_interleave_size) { + case 256: /*256B */ + set_reg_field_value( + lpt_control, + 0, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_PIPE_INTERLEAVE_SIZE); + break; + case 512: /*512B */ + set_reg_field_value( + lpt_control, + 1, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_PIPE_INTERLEAVE_SIZE); + break; + default: + dm_logger_write( + cp110->base.ctx->logger, LOG_WARNING, + "%s: Invalid LPT INTERLEAVE_SIZE!!!", + __func__); + break; + } + + /* The mapping for LOW_POWER_TILING_ROW_SIZE is in + * DMIF_ADDR_CALC.ADDR_CONFIG_ROW_SIZE register field + * for Carrizo. Specifies the size of dram row in bytes. + * This should match up with NOOFCOLS field in + * MC_ARB_RAMCFG (ROW_SIZE = 4 * 2 ^^ columns). + * This register DMIF_ADDR_CALC is not used by the + * hardware as it is only used for addrlib assertions. 
+ * POSSIBLE VALUES: + * 00 - ADDR_CONFIG_1KB_ROW: Treat 1KB as DRAM row + * boundary + * 01 - ADDR_CONFIG_2KB_ROW: Treat 2KB as DRAM row + * boundary + * 02 - ADDR_CONFIG_4KB_ROW: Treat 4KB as DRAM row + * boundary */ + switch (cp110->base.raw_size) { + case 4096: /*4 KB */ + set_reg_field_value( + lpt_control, + 2, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_ROW_SIZE); + break; + case 2048: + set_reg_field_value( + lpt_control, + 1, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_ROW_SIZE); + break; + case 1024: + set_reg_field_value( + lpt_control, + 0, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_ROW_SIZE); + break; + default: + dm_logger_write( + cp110->base.ctx->logger, LOG_WARNING, + "%s: Invalid LPT ROW_SIZE!!!", + __func__); + break; + } + } else { + dm_logger_write( + cp110->base.ctx->logger, LOG_WARNING, + "%s: LPT MC Configuration is not provided", + __func__); + } + + return lpt_control; +} + +static bool is_source_bigger_than_epanel_size( + struct dce112_compressor *cp110, + uint32_t source_view_width, + uint32_t source_view_height) +{ + if (cp110->base.embedded_panel_h_size != 0 && + cp110->base.embedded_panel_v_size != 0 && + ((source_view_width * source_view_height) > + (cp110->base.embedded_panel_h_size * + cp110->base.embedded_panel_v_size))) + return true; + + return false; +} + +static uint32_t align_to_chunks_number_per_line( + struct dce112_compressor *cp110, + uint32_t pixels) +{ + return 256 * ((pixels + 255) / 256); +} + +static void wait_for_fbc_state_changed( + struct dce112_compressor *cp110, + bool enabled) +{ + uint8_t counter = 0; + uint32_t addr = mmFBC_STATUS; + uint32_t value; + + while (counter < 10) { + value = dm_read_reg(cp110->base.ctx, addr); + if (get_reg_field_value( + value, + FBC_STATUS, + FBC_ENABLE_STATUS) == enabled) + break; + udelay(10); + counter++; + } + + if (counter == 10) { + dm_logger_write( + cp110->base.ctx->logger, LOG_WARNING, + "%s: wait counter exceeded, changes to HW not applied", + __func__); + } +} + +void dce112_compressor_power_up_fbc(struct compressor *compressor) +{ + uint32_t value; + uint32_t addr; + + addr = mmFBC_CNTL; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value(value, 0, FBC_CNTL, FBC_GRPH_COMP_EN); + set_reg_field_value(value, 1, FBC_CNTL, FBC_EN); + set_reg_field_value(value, 2, FBC_CNTL, FBC_COHERENCY_MODE); + if (compressor->options.bits.CLK_GATING_DISABLED == 1) { + /* HW needs to do power measurement comparison. 
*/ + set_reg_field_value( + value, + 0, + FBC_CNTL, + FBC_COMP_CLK_GATE_EN); + } + dm_write_reg(compressor->ctx, addr, value); + + addr = mmFBC_COMP_MODE; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value(value, 1, FBC_COMP_MODE, FBC_RLE_EN); + set_reg_field_value(value, 1, FBC_COMP_MODE, FBC_DPCM4_RGB_EN); + set_reg_field_value(value, 1, FBC_COMP_MODE, FBC_IND_EN); + dm_write_reg(compressor->ctx, addr, value); + + addr = mmFBC_COMP_CNTL; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value(value, 1, FBC_COMP_CNTL, FBC_DEPTH_RGB08_EN); + dm_write_reg(compressor->ctx, addr, value); + /*FBC_MIN_COMPRESSION 0 ==> 2:1 */ + /* 1 ==> 4:1 */ + /* 2 ==> 8:1 */ + /* 0xF ==> 1:1 */ + set_reg_field_value(value, 0xF, FBC_COMP_CNTL, FBC_MIN_COMPRESSION); + dm_write_reg(compressor->ctx, addr, value); + compressor->min_compress_ratio = FBC_COMPRESS_RATIO_1TO1; + + value = 0; + dm_write_reg(compressor->ctx, mmFBC_IND_LUT0, value); + + value = 0xFFFFFF; + dm_write_reg(compressor->ctx, mmFBC_IND_LUT1, value); +} + +void dce112_compressor_enable_fbc( + struct compressor *compressor, + uint32_t paths_num, + struct compr_addr_and_pitch_params *params) +{ + struct dce112_compressor *cp110 = TO_DCE112_COMPRESSOR(compressor); + + if (compressor->options.bits.FBC_SUPPORT && + (compressor->options.bits.DUMMY_BACKEND == 0) && + (!dce112_compressor_is_fbc_enabled_in_hw(compressor, NULL)) && + (!is_source_bigger_than_epanel_size( + cp110, + params->source_view_width, + params->source_view_height))) { + + uint32_t addr; + uint32_t value; + + /* Before enabling FBC first need to enable LPT if applicable + * LPT state should always be changed (enable/disable) while FBC + * is disabled */ + if (compressor->options.bits.LPT_SUPPORT && (paths_num < 2) && + (params->source_view_width * + params->source_view_height <= + dce11_one_lpt_channel_max_resolution)) { + dce112_compressor_enable_lpt(compressor); + } + + addr = mmFBC_CNTL; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value(value, 1, FBC_CNTL, FBC_GRPH_COMP_EN); + set_reg_field_value( + value, + params->inst, + FBC_CNTL, FBC_SRC_SEL); + dm_write_reg(compressor->ctx, addr, value); + + /* Keep track of enum controller_id FBC is attached to */ + compressor->is_enabled = true; + compressor->attached_inst = params->inst; + cp110->offsets = reg_offsets[params->inst]; + + /*Toggle it as there is bug in HW */ + set_reg_field_value(value, 0, FBC_CNTL, FBC_GRPH_COMP_EN); + dm_write_reg(compressor->ctx, addr, value); + set_reg_field_value(value, 1, FBC_CNTL, FBC_GRPH_COMP_EN); + dm_write_reg(compressor->ctx, addr, value); + + wait_for_fbc_state_changed(cp110, true); + } +} + +void dce112_compressor_disable_fbc(struct compressor *compressor) +{ + struct dce112_compressor *cp110 = TO_DCE112_COMPRESSOR(compressor); + + if (compressor->options.bits.FBC_SUPPORT && + dce112_compressor_is_fbc_enabled_in_hw(compressor, NULL)) { + uint32_t reg_data; + /* Turn off compression */ + reg_data = dm_read_reg(compressor->ctx, mmFBC_CNTL); + set_reg_field_value(reg_data, 0, FBC_CNTL, FBC_GRPH_COMP_EN); + dm_write_reg(compressor->ctx, mmFBC_CNTL, reg_data); + + /* Reset enum controller_id to undefined */ + compressor->attached_inst = 0; + compressor->is_enabled = false; + + /* Whenever disabling FBC make sure LPT is disabled if LPT + * supported */ + if (compressor->options.bits.LPT_SUPPORT) + dce112_compressor_disable_lpt(compressor); + + wait_for_fbc_state_changed(cp110, false); + } +} + +bool dce112_compressor_is_fbc_enabled_in_hw( + struct 
compressor *compressor, + uint32_t *inst) +{ + /* Check the hardware register */ + uint32_t value; + + value = dm_read_reg(compressor->ctx, mmFBC_STATUS); + if (get_reg_field_value(value, FBC_STATUS, FBC_ENABLE_STATUS)) { + if (inst != NULL) + *inst = compressor->attached_inst; + return true; + } + + value = dm_read_reg(compressor->ctx, mmFBC_MISC); + if (get_reg_field_value(value, FBC_MISC, FBC_STOP_ON_HFLIP_EVENT)) { + value = dm_read_reg(compressor->ctx, mmFBC_CNTL); + + if (get_reg_field_value(value, FBC_CNTL, FBC_GRPH_COMP_EN)) { + if (inst != NULL) + *inst = + compressor->attached_inst; + return true; + } + } + return false; +} + +bool dce112_compressor_is_lpt_enabled_in_hw(struct compressor *compressor) +{ + /* Check the hardware register */ + uint32_t value = dm_read_reg(compressor->ctx, + mmLOW_POWER_TILING_CONTROL); + + return get_reg_field_value( + value, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_ENABLE); +} + +void dce112_compressor_program_compressed_surface_address_and_pitch( + struct compressor *compressor, + struct compr_addr_and_pitch_params *params) +{ + struct dce112_compressor *cp110 = TO_DCE112_COMPRESSOR(compressor); + uint32_t value = 0; + uint32_t fbc_pitch = 0; + uint32_t compressed_surf_address_low_part = + compressor->compr_surface_address.addr.low_part; + + /* Clear content first. */ + dm_write_reg( + compressor->ctx, + DCP_REG(mmGRPH_COMPRESS_SURFACE_ADDRESS_HIGH), + 0); + dm_write_reg(compressor->ctx, + DCP_REG(mmGRPH_COMPRESS_SURFACE_ADDRESS), 0); + + if (compressor->options.bits.LPT_SUPPORT) { + uint32_t lpt_alignment = lpt_size_alignment(cp110); + + if (lpt_alignment != 0) { + compressed_surf_address_low_part = + ((compressed_surf_address_low_part + + (lpt_alignment - 1)) / lpt_alignment) + * lpt_alignment; + } + } + + /* Write address, HIGH has to be first. */ + dm_write_reg(compressor->ctx, + DCP_REG(mmGRPH_COMPRESS_SURFACE_ADDRESS_HIGH), + compressor->compr_surface_address.addr.high_part); + dm_write_reg(compressor->ctx, + DCP_REG(mmGRPH_COMPRESS_SURFACE_ADDRESS), + compressed_surf_address_low_part); + + fbc_pitch = align_to_chunks_number_per_line( + cp110, + params->source_view_width); + + if (compressor->min_compress_ratio == FBC_COMPRESS_RATIO_1TO1) + fbc_pitch = fbc_pitch / 8; + else + dm_logger_write( + compressor->ctx->logger, LOG_WARNING, + "%s: Unexpected DCE11 compression ratio", + __func__); + + /* Clear content first. */ + dm_write_reg(compressor->ctx, DCP_REG(mmGRPH_COMPRESS_PITCH), 0); + + /* Write FBC Pitch. 
*/ + set_reg_field_value( + value, + fbc_pitch, + GRPH_COMPRESS_PITCH, + GRPH_COMPRESS_PITCH); + dm_write_reg(compressor->ctx, DCP_REG(mmGRPH_COMPRESS_PITCH), value); + +} + +void dce112_compressor_disable_lpt(struct compressor *compressor) +{ + struct dce112_compressor *cp110 = TO_DCE112_COMPRESSOR(compressor); + uint32_t value; + uint32_t addr; + uint32_t inx; + + /* Disable all pipes LPT Stutter */ + for (inx = 0; inx < 3; inx++) { + value = + dm_read_reg( + compressor->ctx, + DMIF_REG(mmDPG_PIPE_STUTTER_CONTROL_NONLPTCH)); + set_reg_field_value( + value, + 0, + DPG_PIPE_STUTTER_CONTROL_NONLPTCH, + STUTTER_ENABLE_NONLPTCH); + dm_write_reg( + compressor->ctx, + DMIF_REG(mmDPG_PIPE_STUTTER_CONTROL_NONLPTCH), + value); + } + /* Disable Underlay pipe LPT Stutter */ + addr = mmDPGV0_PIPE_STUTTER_CONTROL_NONLPTCH; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value( + value, + 0, + DPGV0_PIPE_STUTTER_CONTROL_NONLPTCH, + STUTTER_ENABLE_NONLPTCH); + dm_write_reg(compressor->ctx, addr, value); + + /* Disable LPT */ + addr = mmLOW_POWER_TILING_CONTROL; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value( + value, + 0, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_ENABLE); + dm_write_reg(compressor->ctx, addr, value); + + /* Clear selection of Channel(s) containing Compressed Surface */ + addr = mmGMCON_LPT_TARGET; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value( + value, + 0xFFFFFFFF, + GMCON_LPT_TARGET, + STCTRL_LPT_TARGET); + dm_write_reg(compressor->ctx, mmGMCON_LPT_TARGET, value); +} + +void dce112_compressor_enable_lpt(struct compressor *compressor) +{ + struct dce112_compressor *cp110 = TO_DCE112_COMPRESSOR(compressor); + uint32_t value; + uint32_t addr; + uint32_t value_control; + uint32_t channels; + + /* Enable LPT Stutter from Display pipe */ + value = dm_read_reg(compressor->ctx, + DMIF_REG(mmDPG_PIPE_STUTTER_CONTROL_NONLPTCH)); + set_reg_field_value( + value, + 1, + DPG_PIPE_STUTTER_CONTROL_NONLPTCH, + STUTTER_ENABLE_NONLPTCH); + dm_write_reg(compressor->ctx, + DMIF_REG(mmDPG_PIPE_STUTTER_CONTROL_NONLPTCH), value); + + /* Enable Underlay pipe LPT Stutter */ + addr = mmDPGV0_PIPE_STUTTER_CONTROL_NONLPTCH; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value( + value, + 1, + DPGV0_PIPE_STUTTER_CONTROL_NONLPTCH, + STUTTER_ENABLE_NONLPTCH); + dm_write_reg(compressor->ctx, addr, value); + + /* Selection of Channel(s) containing Compressed Surface: 0xfffffff + * will disable LPT. + * STCTRL_LPT_TARGETn corresponds to channel n. 
*/ + addr = mmLOW_POWER_TILING_CONTROL; + value_control = dm_read_reg(compressor->ctx, addr); + channels = get_reg_field_value(value_control, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_MODE); + + addr = mmGMCON_LPT_TARGET; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value( + value, + channels + 1, /* not mentioned in programming guide, + but follow DCE8.1 */ + GMCON_LPT_TARGET, + STCTRL_LPT_TARGET); + dm_write_reg(compressor->ctx, addr, value); + + /* Enable LPT */ + addr = mmLOW_POWER_TILING_CONTROL; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value( + value, + 1, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_ENABLE); + dm_write_reg(compressor->ctx, addr, value); +} + +void dce112_compressor_program_lpt_control( + struct compressor *compressor, + struct compr_addr_and_pitch_params *params) +{ + struct dce112_compressor *cp110 = TO_DCE112_COMPRESSOR(compressor); + uint32_t rows_per_channel; + uint32_t lpt_alignment; + uint32_t source_view_width; + uint32_t source_view_height; + uint32_t lpt_control = 0; + + if (!compressor->options.bits.LPT_SUPPORT) + return; + + lpt_control = dm_read_reg(compressor->ctx, + mmLOW_POWER_TILING_CONTROL); + + /* POSSIBLE VALUES for Low Power Tiling Mode: + * 00 - Use channel 0 + * 01 - Use Channel 0 and 1 + * 02 - Use Channel 0,1,2,3 + * 03 - reserved */ + switch (compressor->lpt_channels_num) { + /* case 2: + * Use Channel 0 & 1 / Not used for DCE 11 */ + case 1: + /*Use Channel 0 for LPT for DCE 11 */ + set_reg_field_value( + lpt_control, + 0, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_MODE); + break; + default: + dm_logger_write( + compressor->ctx->logger, LOG_WARNING, + "%s: Invalid selected DRAM channels for LPT!!!", + __func__); + break; + } + + lpt_control = lpt_memory_control_config(cp110, lpt_control); + + /* Program LOW_POWER_TILING_ROWS_PER_CHAN field which depends on + * FBC compressed surface pitch. + * LOW_POWER_TILING_ROWS_PER_CHAN = Roundup ((Surface Height * + * Surface Pitch) / (Row Size * Number of Channels * + * Number of Banks)). */ + rows_per_channel = 0; + lpt_alignment = lpt_size_alignment(cp110); + source_view_width = + align_to_chunks_number_per_line( + cp110, + params->source_view_width); + source_view_height = (params->source_view_height + 1) & (~0x1); + + if (lpt_alignment != 0) { + rows_per_channel = source_view_width * source_view_height * 4; + rows_per_channel = + (rows_per_channel % lpt_alignment) ? 
+ (rows_per_channel / lpt_alignment + 1) : + rows_per_channel / lpt_alignment; + } + + set_reg_field_value( + lpt_control, + rows_per_channel, + LOW_POWER_TILING_CONTROL, + LOW_POWER_TILING_ROWS_PER_CHAN); + + dm_write_reg(compressor->ctx, + mmLOW_POWER_TILING_CONTROL, lpt_control); +} + +/* + * DCE 11 Frame Buffer Compression Implementation + */ + +void dce112_compressor_set_fbc_invalidation_triggers( + struct compressor *compressor, + uint32_t fbc_trigger) +{ + /* Disable region hit event, FBC_MEMORY_REGION_MASK = 0 (bits 16-19) + * for DCE 11 regions cannot be used - does not work with S/G + */ + uint32_t addr = mmFBC_CLIENT_REGION_MASK; + uint32_t value = dm_read_reg(compressor->ctx, addr); + + set_reg_field_value( + value, + 0, + FBC_CLIENT_REGION_MASK, + FBC_MEMORY_REGION_MASK); + dm_write_reg(compressor->ctx, addr, value); + + /* Setup events when to clear all CSM entries (effectively marking + * current compressed data invalid) + * For DCE 11 CSM metadata 11111 means - "Not Compressed" + * Used as the initial value of the metadata sent to the compressor + * after invalidation, to indicate that the compressor should attempt + * to compress all chunks on the current pass. Also used when the chunk + * is not successfully written to memory. + * When this CSM value is detected, FBC reads from the uncompressed + * buffer. Set events according to passed in value, these events are + * valid for DCE11: + * - bit 0 - display register updated + * - bit 28 - memory write from any client except from MCIF + * - bit 29 - CG static screen signal is inactive + * In addition, DCE11.1 also needs to set new DCE11.1 specific events + * that are used to trigger invalidation on certain register changes, + * for example enabling of Alpha Compression may trigger invalidation of + * FBC once bit is set. 
These events are as follows: + * - Bit 2 - FBC_GRPH_COMP_EN register updated + * - Bit 3 - FBC_SRC_SEL register updated + * - Bit 4 - FBC_MIN_COMPRESSION register updated + * - Bit 5 - FBC_ALPHA_COMP_EN register updated + * - Bit 6 - FBC_ZERO_ALPHA_CHUNK_SKIP_EN register updated + * - Bit 7 - FBC_FORCE_COPY_TO_COMP_BUF register updated + */ + addr = mmFBC_IDLE_FORCE_CLEAR_MASK; + value = dm_read_reg(compressor->ctx, addr); + set_reg_field_value( + value, + fbc_trigger | + FBC_IDLE_FORCE_GRPH_COMP_EN | + FBC_IDLE_FORCE_SRC_SEL_CHANGE | + FBC_IDLE_FORCE_MIN_COMPRESSION_CHANGE | + FBC_IDLE_FORCE_ALPHA_COMP_EN | + FBC_IDLE_FORCE_ZERO_ALPHA_CHUNK_SKIP_EN | + FBC_IDLE_FORCE_FORCE_COPY_TO_COMP_BUF, + FBC_IDLE_FORCE_CLEAR_MASK, + FBC_IDLE_FORCE_CLEAR_MASK); + dm_write_reg(compressor->ctx, addr, value); +} + +void dce112_compressor_construct(struct dce112_compressor *compressor, + struct dc_context *ctx) +{ + struct dc_bios *bp = ctx->dc_bios; + struct embedded_panel_info panel_info; + + compressor->base.options.raw = 0; + compressor->base.options.bits.FBC_SUPPORT = true; + compressor->base.options.bits.LPT_SUPPORT = true; + /* For DCE 11 always use one DRAM channel for LPT */ + compressor->base.lpt_channels_num = 1; + compressor->base.options.bits.DUMMY_BACKEND = false; + + /* Check if this system has more than 1 DRAM channel; if only 1 then LPT + * should not be supported */ + if (compressor->base.memory_bus_width == 64) + compressor->base.options.bits.LPT_SUPPORT = false; + + compressor->base.options.bits.CLK_GATING_DISABLED = false; + + compressor->base.ctx = ctx; + compressor->base.embedded_panel_h_size = 0; + compressor->base.embedded_panel_v_size = 0; + compressor->base.memory_bus_width = ctx->asic_id.vram_width; + compressor->base.allocated_size = 0; + compressor->base.preferred_requested_size = 0; + compressor->base.min_compress_ratio = FBC_COMPRESS_RATIO_INVALID; + compressor->base.banks_num = 0; + compressor->base.raw_size = 0; + compressor->base.channel_interleave_size = 0; + compressor->base.dram_channels_num = 0; + compressor->base.lpt_channels_num = 0; + compressor->base.attached_inst = 0; + compressor->base.is_enabled = false; + + if (BP_RESULT_OK == + bp->funcs->get_embedded_panel_info(bp, &panel_info)) { + compressor->base.embedded_panel_h_size = + panel_info.lcd_timing.horizontal_addressable; + compressor->base.embedded_panel_v_size = + panel_info.lcd_timing.vertical_addressable; + } +} + +struct compressor *dce112_compressor_create(struct dc_context *ctx) +{ + struct dce112_compressor *cp110 = + kzalloc(sizeof(struct dce112_compressor), GFP_KERNEL); + + if (!cp110) + return NULL; + + dce112_compressor_construct(cp110, ctx); + return &cp110->base; +} + +void dce112_compressor_destroy(struct compressor **compressor) +{ + kfree(TO_DCE112_COMPRESSOR(*compressor)); + *compressor = NULL; +} diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_compressor.h b/drivers/gpu/drm/amd/display/dc/dce112/dce112_compressor.h new file mode 100644 index 000000000000..f1227133f6df --- /dev/null +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_compressor.h @@ -0,0 +1,78 @@ +/* Copyright 2012-15 Advanced Micro Devices, Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#ifndef __DC_COMPRESSOR_DCE112_H__ +#define __DC_COMPRESSOR_DCE112_H__ + +#include "../inc/compressor.h" + +#define TO_DCE112_COMPRESSOR(compressor)\ + container_of(compressor, struct dce112_compressor, base) + +struct dce112_compressor_reg_offsets { + uint32_t dcp_offset; + uint32_t dmif_offset; +}; + +struct dce112_compressor { + struct compressor base; + struct dce112_compressor_reg_offsets offsets; +}; + +struct compressor *dce112_compressor_create(struct dc_context *ctx); + +void dce112_compressor_construct(struct dce112_compressor *cp110, + struct dc_context *ctx); + +void dce112_compressor_destroy(struct compressor **cp); + +/* FBC RELATED */ +void dce112_compressor_power_up_fbc(struct compressor *cp); + +void dce112_compressor_enable_fbc(struct compressor *cp, uint32_t paths_num, + struct compr_addr_and_pitch_params *params); + +void dce112_compressor_disable_fbc(struct compressor *cp); + +void dce112_compressor_set_fbc_invalidation_triggers(struct compressor *cp, + uint32_t fbc_trigger); + +void dce112_compressor_program_compressed_surface_address_and_pitch( + struct compressor *cp, + struct compr_addr_and_pitch_params *params); + +bool dce112_compressor_is_fbc_enabled_in_hw(struct compressor *cp, + uint32_t *fbc_mapped_crtc_id); + +/* LPT RELATED */ +void dce112_compressor_enable_lpt(struct compressor *cp); + +void dce112_compressor_disable_lpt(struct compressor *cp); + +void dce112_compressor_program_lpt_control(struct compressor *cp, + struct compr_addr_and_pitch_params *params); + +bool dce112_compressor_is_lpt_enabled_in_hw(struct compressor *cp); + +#endif diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c new file mode 100644 index 000000000000..1e4a7c13f0ed --- /dev/null +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c @@ -0,0 +1,163 @@ +/* + * Copyright 2015 Advanced Micro Devices, Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#include "dm_services.h" +#include "dc.h" +#include "core_types.h" +#include "dce112_hw_sequencer.h" + +#include "dce110/dce110_hw_sequencer.h" + +/* include DCE11.2 register header files */ +#include "dce/dce_11_2_d.h" +#include "dce/dce_11_2_sh_mask.h" + +struct dce112_hw_seq_reg_offsets { + uint32_t crtc; +}; + + +static const struct dce112_hw_seq_reg_offsets reg_offsets[] = { +{ + .crtc = (mmCRTC0_CRTC_GSL_CONTROL - mmCRTC_GSL_CONTROL), +}, +{ + .crtc = (mmCRTC1_CRTC_GSL_CONTROL - mmCRTC_GSL_CONTROL), +}, +{ + .crtc = (mmCRTC2_CRTC_GSL_CONTROL - mmCRTC_GSL_CONTROL), +}, +{ + .crtc = (mmCRTC3_CRTC_GSL_CONTROL - mmCRTC_GSL_CONTROL), +}, +{ + .crtc = (mmCRTC4_CRTC_GSL_CONTROL - mmCRTC_GSL_CONTROL), +}, +{ + .crtc = (mmCRTC5_CRTC_GSL_CONTROL - mmCRTC_GSL_CONTROL), +} +}; +#define HW_REG_CRTC(reg, id)\ + (reg + reg_offsets[id].crtc) + +/******************************************************************************* + * Private definitions + ******************************************************************************/ + +static void dce112_init_pte(struct dc_context *ctx) +{ + uint32_t addr; + uint32_t value = 0; + uint32_t chunk_int = 0; + uint32_t chunk_mul = 0; + + addr = mmDVMM_PTE_REQ; + value = dm_read_reg(ctx, addr); + + chunk_int = get_reg_field_value( + value, + DVMM_PTE_REQ, + HFLIP_PTEREQ_PER_CHUNK_INT); + + chunk_mul = get_reg_field_value( + value, + DVMM_PTE_REQ, + HFLIP_PTEREQ_PER_CHUNK_MULTIPLIER); + + if (chunk_int != 0x4 || chunk_mul != 0x4) { + + set_reg_field_value( + value, + 255, + DVMM_PTE_REQ, + MAX_PTEREQ_TO_ISSUE); + + set_reg_field_value( + value, + 4, + DVMM_PTE_REQ, + HFLIP_PTEREQ_PER_CHUNK_INT); + + set_reg_field_value( + value, + 4, + DVMM_PTE_REQ, + HFLIP_PTEREQ_PER_CHUNK_MULTIPLIER); + + dm_write_reg(ctx, addr, value); + } +} + +static bool dce112_enable_display_power_gating( + struct dc *dc, + uint8_t controller_id, + struct dc_bios *dcb, + enum pipe_gating_control power_gating) +{ + enum bp_result bp_result = BP_RESULT_OK; + enum bp_pipe_control_action cntl; + struct dc_context *ctx = dc->ctx; + + if (IS_FPGA_MAXIMUS_DC(ctx->dce_environment)) + return true; + + if (power_gating == PIPE_GATING_CONTROL_INIT) + cntl = ASIC_PIPE_INIT; + else if (power_gating == PIPE_GATING_CONTROL_ENABLE) + cntl = ASIC_PIPE_ENABLE; + else + cntl = ASIC_PIPE_DISABLE; + + if (power_gating != PIPE_GATING_CONTROL_INIT || controller_id == 0){ + + bp_result = 
dcb->funcs->enable_disp_power_gating( + dcb, controller_id + 1, cntl); + + /* Revert MASTER_UPDATE_MODE to 0 because bios sets it 2 + * by default when command table is called + */ + dm_write_reg(ctx, + HW_REG_CRTC(mmCRTC_MASTER_UPDATE_MODE, controller_id), + 0); + } + + if (power_gating != PIPE_GATING_CONTROL_ENABLE) + dce112_init_pte(ctx); + + if (bp_result == BP_RESULT_OK) + return true; + else + return false; +} + +void dce112_hw_sequencer_construct(struct dc *dc) +{ + /* All registers used by dce11.2 match those in dce11 in offset and + * structure + */ + dce110_hw_sequencer_construct(dc); + dc->hwss.enable_display_power_gating = dce112_enable_display_power_gating; +} + diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.h new file mode 100644 index 000000000000..e646f4a37fa2 --- /dev/null +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.h @@ -0,0 +1,36 @@ +/* +* Copyright 2012-15 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#ifndef __DC_HWSS_DCE112_H__ +#define __DC_HWSS_DCE112_H__ + +#include "core_types.h" + +struct dc; + +void dce112_hw_sequencer_construct(struct dc *dc); + +#endif /* __DC_HWSS_DCE112_H__ */ + diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c new file mode 100644 index 000000000000..663e0a047a4b --- /dev/null +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c @@ -0,0 +1,1283 @@ +/* +* Copyright 2012-15 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#include "dm_services.h" + +#include "link_encoder.h" +#include "stream_encoder.h" + +#include "resource.h" +#include "include/irq_service_interface.h" +#include "dce110/dce110_resource.h" +#include "dce110/dce110_timing_generator.h" + +#include "irq/dce110/irq_service_dce110.h" + +#include "dce/dce_mem_input.h" +#include "dce/dce_transform.h" +#include "dce/dce_link_encoder.h" +#include "dce/dce_stream_encoder.h" +#include "dce/dce_audio.h" +#include "dce/dce_opp.h" +#include "dce/dce_ipp.h" +#include "dce/dce_clocks.h" +#include "dce/dce_clock_source.h" + +#include "dce/dce_hwseq.h" +#include "dce112/dce112_hw_sequencer.h" +#include "dce/dce_abm.h" +#include "dce/dce_dmcu.h" + +#include "reg_helper.h" + +#include "dce/dce_11_2_d.h" +#include "dce/dce_11_2_sh_mask.h" + +#include "dce100/dce100_resource.h" + +#ifndef mmDP_DPHY_INTERNAL_CTRL + #define mmDP_DPHY_INTERNAL_CTRL 0x4aa7 + #define mmDP0_DP_DPHY_INTERNAL_CTRL 0x4aa7 + #define mmDP1_DP_DPHY_INTERNAL_CTRL 0x4ba7 + #define mmDP2_DP_DPHY_INTERNAL_CTRL 0x4ca7 + #define mmDP3_DP_DPHY_INTERNAL_CTRL 0x4da7 + #define mmDP4_DP_DPHY_INTERNAL_CTRL 0x4ea7 + #define mmDP5_DP_DPHY_INTERNAL_CTRL 0x4fa7 + #define mmDP6_DP_DPHY_INTERNAL_CTRL 0x54a7 + #define mmDP7_DP_DPHY_INTERNAL_CTRL 0x56a7 + #define mmDP8_DP_DPHY_INTERNAL_CTRL 0x57a7 +#endif + +#ifndef mmBIOS_SCRATCH_2 + #define mmBIOS_SCRATCH_2 0x05CB + #define mmBIOS_SCRATCH_6 0x05CF +#endif + +#ifndef mmDP_DPHY_BS_SR_SWAP_CNTL + #define mmDP_DPHY_BS_SR_SWAP_CNTL 0x4ADC + #define mmDP0_DP_DPHY_BS_SR_SWAP_CNTL 0x4ADC + #define mmDP1_DP_DPHY_BS_SR_SWAP_CNTL 0x4BDC + #define mmDP2_DP_DPHY_BS_SR_SWAP_CNTL 0x4CDC + #define mmDP3_DP_DPHY_BS_SR_SWAP_CNTL 0x4DDC + #define mmDP4_DP_DPHY_BS_SR_SWAP_CNTL 0x4EDC + #define mmDP5_DP_DPHY_BS_SR_SWAP_CNTL 0x4FDC + #define mmDP6_DP_DPHY_BS_SR_SWAP_CNTL 0x54DC +#endif + +#ifndef mmDP_DPHY_FAST_TRAINING + #define mmDP_DPHY_FAST_TRAINING 0x4ABC + #define mmDP0_DP_DPHY_FAST_TRAINING 0x4ABC + #define mmDP1_DP_DPHY_FAST_TRAINING 0x4BBC + #define mmDP2_DP_DPHY_FAST_TRAINING 0x4CBC + #define mmDP3_DP_DPHY_FAST_TRAINING 0x4DBC + #define mmDP4_DP_DPHY_FAST_TRAINING 0x4EBC + #define mmDP5_DP_DPHY_FAST_TRAINING 0x4FBC + #define mmDP6_DP_DPHY_FAST_TRAINING 0x54BC +#endif + +enum dce112_clk_src_array_id { + DCE112_CLK_SRC_PLL0, + DCE112_CLK_SRC_PLL1, + DCE112_CLK_SRC_PLL2, + DCE112_CLK_SRC_PLL3, + DCE112_CLK_SRC_PLL4, + DCE112_CLK_SRC_PLL5, + + DCE112_CLK_SRC_TOTAL +}; + +static const struct dce110_timing_generator_offsets dce112_tg_offsets[] = { + { + .crtc = (mmCRTC0_CRTC_CONTROL - mmCRTC_CONTROL), + .dcp = (mmDCP0_GRPH_CONTROL - mmGRPH_CONTROL), + }, + { + .crtc = (mmCRTC1_CRTC_CONTROL - mmCRTC_CONTROL), + .dcp = (mmDCP1_GRPH_CONTROL - mmGRPH_CONTROL), + }, + { + .crtc = (mmCRTC2_CRTC_CONTROL - mmCRTC_CONTROL), + .dcp = (mmDCP2_GRPH_CONTROL - mmGRPH_CONTROL), + }, + { + .crtc = (mmCRTC3_CRTC_CONTROL - mmCRTC_CONTROL), + .dcp = (mmDCP3_GRPH_CONTROL - mmGRPH_CONTROL), + }, + { + .crtc = (mmCRTC4_CRTC_CONTROL - mmCRTC_CONTROL), + .dcp = (mmDCP4_GRPH_CONTROL - mmGRPH_CONTROL), + }, + { + .crtc = (mmCRTC5_CRTC_CONTROL - mmCRTC_CONTROL), + .dcp = (mmDCP5_GRPH_CONTROL - mmGRPH_CONTROL), + } +}; + +/* set register offset */ +#define SR(reg_name)\ + .reg_name = mm ## 
reg_name + +/* set register offset with instance */ +#define SRI(reg_name, block, id)\ + .reg_name = mm ## block ## id ## _ ## reg_name + + +static const struct dce_disp_clk_registers disp_clk_regs = { + CLK_COMMON_REG_LIST_DCE_BASE() +}; + +static const struct dce_disp_clk_shift disp_clk_shift = { + CLK_COMMON_MASK_SH_LIST_DCE_COMMON_BASE(__SHIFT) +}; + +static const struct dce_disp_clk_mask disp_clk_mask = { + CLK_COMMON_MASK_SH_LIST_DCE_COMMON_BASE(_MASK) +}; + +static const struct dce_dmcu_registers dmcu_regs = { + DMCU_DCE110_COMMON_REG_LIST() +}; + +static const struct dce_dmcu_shift dmcu_shift = { + DMCU_MASK_SH_LIST_DCE110(__SHIFT) +}; + +static const struct dce_dmcu_mask dmcu_mask = { + DMCU_MASK_SH_LIST_DCE110(_MASK) +}; + +static const struct dce_abm_registers abm_regs = { + ABM_DCE110_COMMON_REG_LIST() +}; + +static const struct dce_abm_shift abm_shift = { + ABM_MASK_SH_LIST_DCE110(__SHIFT) +}; + +static const struct dce_abm_mask abm_mask = { + ABM_MASK_SH_LIST_DCE110(_MASK) +}; + +#define ipp_regs(id)\ +[id] = {\ + IPP_DCE110_REG_LIST_DCE_BASE(id)\ +} + +static const struct dce_ipp_registers ipp_regs[] = { + ipp_regs(0), + ipp_regs(1), + ipp_regs(2), + ipp_regs(3), + ipp_regs(4), + ipp_regs(5) +}; + +static const struct dce_ipp_shift ipp_shift = { + IPP_DCE100_MASK_SH_LIST_DCE_COMMON_BASE(__SHIFT) +}; + +static const struct dce_ipp_mask ipp_mask = { + IPP_DCE100_MASK_SH_LIST_DCE_COMMON_BASE(_MASK) +}; + +#define transform_regs(id)\ +[id] = {\ + XFM_COMMON_REG_LIST_DCE110(id)\ +} + +static const struct dce_transform_registers xfm_regs[] = { + transform_regs(0), + transform_regs(1), + transform_regs(2), + transform_regs(3), + transform_regs(4), + transform_regs(5) +}; + +static const struct dce_transform_shift xfm_shift = { + XFM_COMMON_MASK_SH_LIST_DCE110(__SHIFT) +}; + +static const struct dce_transform_mask xfm_mask = { + XFM_COMMON_MASK_SH_LIST_DCE110(_MASK) +}; + +#define aux_regs(id)\ +[id] = {\ + AUX_REG_LIST(id)\ +} + +static const struct dce110_link_enc_aux_registers link_enc_aux_regs[] = { + aux_regs(0), + aux_regs(1), + aux_regs(2), + aux_regs(3), + aux_regs(4), + aux_regs(5) +}; + +#define hpd_regs(id)\ +[id] = {\ + HPD_REG_LIST(id)\ +} + +static const struct dce110_link_enc_hpd_registers link_enc_hpd_regs[] = { + hpd_regs(0), + hpd_regs(1), + hpd_regs(2), + hpd_regs(3), + hpd_regs(4), + hpd_regs(5) +}; + +#define link_regs(id)\ +[id] = {\ + LE_DCE110_REG_LIST(id)\ +} + +static const struct dce110_link_enc_registers link_enc_regs[] = { + link_regs(0), + link_regs(1), + link_regs(2), + link_regs(3), + link_regs(4), + link_regs(5), + link_regs(6), +}; + +#define stream_enc_regs(id)\ +[id] = {\ + SE_COMMON_REG_LIST(id),\ + .TMDS_CNTL = 0,\ +} + +static const struct dce110_stream_enc_registers stream_enc_regs[] = { + stream_enc_regs(0), + stream_enc_regs(1), + stream_enc_regs(2), + stream_enc_regs(3), + stream_enc_regs(4), + stream_enc_regs(5) +}; + +static const struct dce_stream_encoder_shift se_shift = { + SE_COMMON_MASK_SH_LIST_DCE112(__SHIFT) +}; + +static const struct dce_stream_encoder_mask se_mask = { + SE_COMMON_MASK_SH_LIST_DCE112(_MASK) +}; + +#define opp_regs(id)\ +[id] = {\ + OPP_DCE_112_REG_LIST(id),\ +} + +static const struct dce_opp_registers opp_regs[] = { + opp_regs(0), + opp_regs(1), + opp_regs(2), + opp_regs(3), + opp_regs(4), + opp_regs(5) +}; + +static const struct dce_opp_shift opp_shift = { + OPP_COMMON_MASK_SH_LIST_DCE_112(__SHIFT) +}; + +static const struct dce_opp_mask opp_mask = { + OPP_COMMON_MASK_SH_LIST_DCE_112(_MASK) +}; + +#define 
audio_regs(id)\ +[id] = {\ + AUD_COMMON_REG_LIST(id)\ +} + +static const struct dce_audio_registers audio_regs[] = { + audio_regs(0), + audio_regs(1), + audio_regs(2), + audio_regs(3), + audio_regs(4), + audio_regs(5) +}; + +static const struct dce_audio_shift audio_shift = { + AUD_COMMON_MASK_SH_LIST(__SHIFT) +}; + +static const struct dce_aduio_mask audio_mask = { + AUD_COMMON_MASK_SH_LIST(_MASK) +}; + +#define clk_src_regs(index, id)\ +[index] = {\ + CS_COMMON_REG_LIST_DCE_112(id),\ +} + +static const struct dce110_clk_src_regs clk_src_regs[] = { + clk_src_regs(0, A), + clk_src_regs(1, B), + clk_src_regs(2, C), + clk_src_regs(3, D), + clk_src_regs(4, E), + clk_src_regs(5, F) +}; + +static const struct dce110_clk_src_shift cs_shift = { + CS_COMMON_MASK_SH_LIST_DCE_112(__SHIFT) +}; + +static const struct dce110_clk_src_mask cs_mask = { + CS_COMMON_MASK_SH_LIST_DCE_112(_MASK) +}; + +static const struct bios_registers bios_regs = { + .BIOS_SCRATCH_6 = mmBIOS_SCRATCH_6 +}; + +static const struct resource_caps polaris_10_resource_cap = { + .num_timing_generator = 6, + .num_audio = 6, + .num_stream_encoder = 6, + .num_pll = 8, /* why 8? 6 combo PHY PLL + 2 regular PLLs? */ +}; + +static const struct resource_caps polaris_11_resource_cap = { + .num_timing_generator = 5, + .num_audio = 5, + .num_stream_encoder = 5, + .num_pll = 8, /* why 8? 6 combo PHY PLL + 2 regular PLLs? */ +}; + +#define CTX ctx +#define REG(reg) mm ## reg + +#ifndef mmCC_DC_HDMI_STRAPS +#define mmCC_DC_HDMI_STRAPS 0x4819 +#define CC_DC_HDMI_STRAPS__HDMI_DISABLE_MASK 0x40 +#define CC_DC_HDMI_STRAPS__HDMI_DISABLE__SHIFT 0x6 +#define CC_DC_HDMI_STRAPS__AUDIO_STREAM_NUMBER_MASK 0x700 +#define CC_DC_HDMI_STRAPS__AUDIO_STREAM_NUMBER__SHIFT 0x8 +#endif + +static void read_dce_straps( + struct dc_context *ctx, + struct resource_straps *straps) +{ + REG_GET_2(CC_DC_HDMI_STRAPS, + HDMI_DISABLE, &straps->hdmi_disable, + AUDIO_STREAM_NUMBER, &straps->audio_stream_number); + + REG_GET(DC_PINSTRAPS, DC_PINSTRAPS_AUDIO, &straps->dc_pinstraps_audio); +} + +static struct audio *create_audio( + struct dc_context *ctx, unsigned int inst) +{ + return dce_audio_create(ctx, inst, + &audio_regs[inst], &audio_shift, &audio_mask); +} + + +static struct timing_generator *dce112_timing_generator_create( + struct dc_context *ctx, + uint32_t instance, + const struct dce110_timing_generator_offsets *offsets) +{ + struct dce110_timing_generator *tg110 = + kzalloc(sizeof(struct dce110_timing_generator), GFP_KERNEL); + + if (!tg110) + return NULL; + + dce110_timing_generator_construct(tg110, ctx, instance, offsets); + return &tg110->base; +} + +static struct stream_encoder *dce112_stream_encoder_create( + enum engine_id eng_id, + struct dc_context *ctx) +{ + struct dce110_stream_encoder *enc110 = + kzalloc(sizeof(struct dce110_stream_encoder), GFP_KERNEL); + + if (!enc110) + return NULL; + + dce110_stream_encoder_construct(enc110, ctx, ctx->dc_bios, eng_id, + &stream_enc_regs[eng_id], + &se_shift, &se_mask); + return &enc110->base; +} + +#define SRII(reg_name, block, id)\ + .reg_name[id] = mm ## block ## id ## _ ## reg_name + +static const struct dce_hwseq_registers hwseq_reg = { + HWSEQ_DCE112_REG_LIST() +}; + +static const struct dce_hwseq_shift hwseq_shift = { + HWSEQ_DCE112_MASK_SH_LIST(__SHIFT) +}; + +static const struct dce_hwseq_mask hwseq_mask = { + HWSEQ_DCE112_MASK_SH_LIST(_MASK) +}; + +static struct dce_hwseq *dce112_hwseq_create( + struct dc_context *ctx) +{ + struct dce_hwseq *hws = kzalloc(sizeof(struct dce_hwseq), GFP_KERNEL); + + if (hws) { 
+ hws->ctx = ctx; + hws->regs = &hwseq_reg; + hws->shifts = &hwseq_shift; + hws->masks = &hwseq_mask; + } + return hws; +} + +static const struct resource_create_funcs res_create_funcs = { + .read_dce_straps = read_dce_straps, + .create_audio = create_audio, + .create_stream_encoder = dce112_stream_encoder_create, + .create_hwseq = dce112_hwseq_create, +}; + +#define mi_inst_regs(id) { MI_DCE11_2_REG_LIST(id) } +static const struct dce_mem_input_registers mi_regs[] = { + mi_inst_regs(0), + mi_inst_regs(1), + mi_inst_regs(2), + mi_inst_regs(3), + mi_inst_regs(4), + mi_inst_regs(5), +}; + +static const struct dce_mem_input_shift mi_shifts = { + MI_DCE11_2_MASK_SH_LIST(__SHIFT) +}; + +static const struct dce_mem_input_mask mi_masks = { + MI_DCE11_2_MASK_SH_LIST(_MASK) +}; + +static struct mem_input *dce112_mem_input_create( + struct dc_context *ctx, + uint32_t inst) +{ + struct dce_mem_input *dce_mi = kzalloc(sizeof(struct dce_mem_input), + GFP_KERNEL); + + if (!dce_mi) { + BREAK_TO_DEBUGGER(); + return NULL; + } + + dce112_mem_input_construct(dce_mi, ctx, inst, &mi_regs[inst], &mi_shifts, &mi_masks); + return &dce_mi->base; +} + +static void dce112_transform_destroy(struct transform **xfm) +{ + kfree(TO_DCE_TRANSFORM(*xfm)); + *xfm = NULL; +} + +static struct transform *dce112_transform_create( + struct dc_context *ctx, + uint32_t inst) +{ + struct dce_transform *transform = + kzalloc(sizeof(struct dce_transform), GFP_KERNEL); + + if (!transform) + return NULL; + + dce_transform_construct(transform, ctx, inst, + &xfm_regs[inst], &xfm_shift, &xfm_mask); + transform->lb_memory_size = 0x1404; /*5124*/ + return &transform->base; +} + +static const struct encoder_feature_support link_enc_feature = { + .max_hdmi_deep_color = COLOR_DEPTH_121212, + .max_hdmi_pixel_clock = 600000, + .ycbcr420_supported = true, + .flags.bits.IS_HBR2_CAPABLE = true, + .flags.bits.IS_HBR3_CAPABLE = true, + .flags.bits.IS_TPS3_CAPABLE = true, + .flags.bits.IS_TPS4_CAPABLE = true, + .flags.bits.IS_YCBCR_CAPABLE = true +}; + +struct link_encoder *dce112_link_encoder_create( + const struct encoder_init_data *enc_init_data) +{ + struct dce110_link_encoder *enc110 = + kzalloc(sizeof(struct dce110_link_encoder), GFP_KERNEL); + + if (!enc110) + return NULL; + + dce110_link_encoder_construct(enc110, + enc_init_data, + &link_enc_feature, + &link_enc_regs[enc_init_data->transmitter], + &link_enc_aux_regs[enc_init_data->channel - 1], + &link_enc_hpd_regs[enc_init_data->hpd_source]); + return &enc110->base; +} + +static struct input_pixel_processor *dce112_ipp_create( + struct dc_context *ctx, uint32_t inst) +{ + struct dce_ipp *ipp = kzalloc(sizeof(struct dce_ipp), GFP_KERNEL); + + if (!ipp) { + BREAK_TO_DEBUGGER(); + return NULL; + } + + dce_ipp_construct(ipp, ctx, inst, + &ipp_regs[inst], &ipp_shift, &ipp_mask); + return &ipp->base; +} + +struct output_pixel_processor *dce112_opp_create( + struct dc_context *ctx, + uint32_t inst) +{ + struct dce110_opp *opp = + kzalloc(sizeof(struct dce110_opp), GFP_KERNEL); + + if (!opp) + return NULL; + + dce110_opp_construct(opp, + ctx, inst, &opp_regs[inst], &opp_shift, &opp_mask); + return &opp->base; +} + +struct clock_source *dce112_clock_source_create( + struct dc_context *ctx, + struct dc_bios *bios, + enum clock_source_id id, + const struct dce110_clk_src_regs *regs, + bool dp_clk_src) +{ + struct dce110_clk_src *clk_src = + kzalloc(sizeof(struct dce110_clk_src), GFP_KERNEL); + + if (!clk_src) + return NULL; + + if (dce110_clk_src_construct(clk_src, ctx, bios, id, + regs, &cs_shift, 
&cs_mask)) { + clk_src->base.dp_clk_src = dp_clk_src; + return &clk_src->base; + } + + BREAK_TO_DEBUGGER(); + return NULL; +} + +void dce112_clock_source_destroy(struct clock_source **clk_src) +{ + kfree(TO_DCE110_CLK_SRC(*clk_src)); + *clk_src = NULL; +} + +static void destruct(struct dce110_resource_pool *pool) +{ + unsigned int i; + + for (i = 0; i < pool->base.pipe_count; i++) { + if (pool->base.opps[i] != NULL) + dce110_opp_destroy(&pool->base.opps[i]); + + if (pool->base.transforms[i] != NULL) + dce112_transform_destroy(&pool->base.transforms[i]); + + if (pool->base.ipps[i] != NULL) + dce_ipp_destroy(&pool->base.ipps[i]); + + if (pool->base.mis[i] != NULL) { + kfree(TO_DCE_MEM_INPUT(pool->base.mis[i])); + pool->base.mis[i] = NULL; + } + + if (pool->base.timing_generators[i] != NULL) { + kfree(DCE110TG_FROM_TG(pool->base.timing_generators[i])); + pool->base.timing_generators[i] = NULL; + } + } + + for (i = 0; i < pool->base.stream_enc_count; i++) { + if (pool->base.stream_enc[i] != NULL) + kfree(DCE110STRENC_FROM_STRENC(pool->base.stream_enc[i])); + } + + for (i = 0; i < pool->base.clk_src_count; i++) { + if (pool->base.clock_sources[i] != NULL) { + dce112_clock_source_destroy(&pool->base.clock_sources[i]); + } + } + + if (pool->base.dp_clock_source != NULL) + dce112_clock_source_destroy(&pool->base.dp_clock_source); + + for (i = 0; i < pool->base.audio_count; i++) { + if (pool->base.audios[i] != NULL) { + dce_aud_destroy(&pool->base.audios[i]); + } + } + + if (pool->base.abm != NULL) + dce_abm_destroy(&pool->base.abm); + + if (pool->base.dmcu != NULL) + dce_dmcu_destroy(&pool->base.dmcu); + + if (pool->base.display_clock != NULL) + dce_disp_clk_destroy(&pool->base.display_clock); + + if (pool->base.irqs != NULL) { + dal_irq_service_destroy(&pool->base.irqs); + } +} + +static struct clock_source *find_matching_pll( + struct resource_context *res_ctx, + const struct resource_pool *pool, + const struct dc_stream_state *const stream) +{ + switch (stream->sink->link->link_enc->transmitter) { + case TRANSMITTER_UNIPHY_A: + return pool->clock_sources[DCE112_CLK_SRC_PLL0]; + case TRANSMITTER_UNIPHY_B: + return pool->clock_sources[DCE112_CLK_SRC_PLL1]; + case TRANSMITTER_UNIPHY_C: + return pool->clock_sources[DCE112_CLK_SRC_PLL2]; + case TRANSMITTER_UNIPHY_D: + return pool->clock_sources[DCE112_CLK_SRC_PLL3]; + case TRANSMITTER_UNIPHY_E: + return pool->clock_sources[DCE112_CLK_SRC_PLL4]; + case TRANSMITTER_UNIPHY_F: + return pool->clock_sources[DCE112_CLK_SRC_PLL5]; + default: + return NULL; + }; + + return 0; +} + +static enum dc_status build_mapped_resource( + const struct dc *dc, + struct dc_state *context, + struct dc_stream_state *stream) +{ + struct pipe_ctx *pipe_ctx = resource_get_head_pipe_for_stream(&context->res_ctx, stream); + + if (!pipe_ctx) + return DC_ERROR_UNEXPECTED; + + dce110_resource_build_pipe_hw_param(pipe_ctx); + + resource_build_info_frame(pipe_ctx); + + return DC_OK; +} + +bool dce112_validate_bandwidth( + struct dc *dc, + struct dc_state *context) +{ + bool result = false; + + dm_logger_write( + dc->ctx->logger, LOG_BANDWIDTH_CALCS, + "%s: start", + __func__); + + if (bw_calcs( + dc->ctx, + dc->bw_dceip, + dc->bw_vbios, + context->res_ctx.pipe_ctx, + dc->res_pool->pipe_count, + &context->bw.dce)) + result = true; + + if (!result) + dm_logger_write(dc->ctx->logger, LOG_BANDWIDTH_VALIDATION, + "%s: Bandwidth validation failed!", + __func__); + + if (memcmp(&dc->current_state->bw.dce, + &context->bw.dce, sizeof(context->bw.dce))) { + struct log_entry log_entry; + 
+		dm_logger_open(
+			dc->ctx->logger,
+			&log_entry,
+			LOG_BANDWIDTH_CALCS);
+		dm_logger_append(&log_entry, "%s: finish,\n"
+			"nbpMark_b: %d nbpMark_a: %d urgentMark_b: %d urgentMark_a: %d\n"
+			"stutMark_b: %d stutMark_a: %d\n",
+			__func__,
+			context->bw.dce.nbp_state_change_wm_ns[0].b_mark,
+			context->bw.dce.nbp_state_change_wm_ns[0].a_mark,
+			context->bw.dce.urgent_wm_ns[0].b_mark,
+			context->bw.dce.urgent_wm_ns[0].a_mark,
+			context->bw.dce.stutter_exit_wm_ns[0].b_mark,
+			context->bw.dce.stutter_exit_wm_ns[0].a_mark);
+		dm_logger_append(&log_entry,
+			"nbpMark_b: %d nbpMark_a: %d urgentMark_b: %d urgentMark_a: %d\n"
+			"stutMark_b: %d stutMark_a: %d\n",
+			context->bw.dce.nbp_state_change_wm_ns[1].b_mark,
+			context->bw.dce.nbp_state_change_wm_ns[1].a_mark,
+			context->bw.dce.urgent_wm_ns[1].b_mark,
+			context->bw.dce.urgent_wm_ns[1].a_mark,
+			context->bw.dce.stutter_exit_wm_ns[1].b_mark,
+			context->bw.dce.stutter_exit_wm_ns[1].a_mark);
+		dm_logger_append(&log_entry,
+			"nbpMark_b: %d nbpMark_a: %d urgentMark_b: %d urgentMark_a: %d\n"
+			"stutMark_b: %d stutMark_a: %d stutter_mode_enable: %d\n",
+			context->bw.dce.nbp_state_change_wm_ns[2].b_mark,
+			context->bw.dce.nbp_state_change_wm_ns[2].a_mark,
+			context->bw.dce.urgent_wm_ns[2].b_mark,
+			context->bw.dce.urgent_wm_ns[2].a_mark,
+			context->bw.dce.stutter_exit_wm_ns[2].b_mark,
+			context->bw.dce.stutter_exit_wm_ns[2].a_mark,
+			context->bw.dce.stutter_mode_enable);
+		dm_logger_append(&log_entry,
+			"cstate: %d pstate: %d nbpstate: %d sync: %d dispclk: %d\n"
+			"sclk: %d sclk_sleep: %d yclk: %d blackout_recovery_time_us: %d\n",
+			context->bw.dce.cpuc_state_change_enable,
+			context->bw.dce.cpup_state_change_enable,
+			context->bw.dce.nbp_state_change_enable,
+			context->bw.dce.all_displays_in_sync,
+			context->bw.dce.dispclk_khz,
+			context->bw.dce.sclk_khz,
+			context->bw.dce.sclk_deep_sleep_khz,
+			context->bw.dce.yclk_khz,
+			context->bw.dce.blackout_recovery_time_us);
+		dm_logger_close(&log_entry);
+	}
+	return result;
+}
+
+enum dc_status resource_map_phy_clock_resources(
+	const struct dc *dc,
+	struct dc_state *context,
+	struct dc_stream_state *stream)
+{
+	/* acquire new resources */
+	struct pipe_ctx *pipe_ctx = resource_get_head_pipe_for_stream(
+			&context->res_ctx, stream);
+
+	if (!pipe_ctx)
+		return DC_ERROR_UNEXPECTED;
+
+	if (dc_is_dp_signal(pipe_ctx->stream->signal)
+			|| pipe_ctx->stream->signal == SIGNAL_TYPE_VIRTUAL)
+		pipe_ctx->clock_source =
+			dc->res_pool->dp_clock_source;
+	else
+		pipe_ctx->clock_source = find_matching_pll(
+			&context->res_ctx, dc->res_pool,
+			stream);
+
+	if (pipe_ctx->clock_source == NULL)
+		return DC_NO_CLOCK_SOURCE_RESOURCE;
+
+	resource_reference_clock_source(
+		&context->res_ctx,
+		dc->res_pool,
+		pipe_ctx->clock_source);
+
+	return DC_OK;
+}
+
+static bool dce112_validate_surface_sets(
+	struct dc_state *context)
+{
+	int i;
+
+	for (i = 0; i < context->stream_count; i++) {
+		if (context->stream_status[i].plane_count == 0)
+			continue;
+
+		if (context->stream_status[i].plane_count > 1)
+			return false;
+
+		if (context->stream_status[i].plane_states[0]->format
+				>= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN)
+			return false;
+	}
+
+	return true;
+}
+
+enum dc_status dce112_add_stream_to_ctx(
+	struct dc *dc,
+	struct dc_state *new_ctx,
+	struct dc_stream_state *dc_stream)
+{
+	enum dc_status result = DC_ERROR_UNEXPECTED;
+
+	result = resource_map_pool_resources(dc, new_ctx, dc_stream);
+
+	if (result == DC_OK)
+		result = resource_map_phy_clock_resources(dc, new_ctx, dc_stream);
+
+	if (result == DC_OK)
+		result = build_mapped_resource(dc, new_ctx, dc_stream);
+
+	return result;
+}
+
+enum dc_status dce112_validate_guaranteed(
+	struct dc *dc,
+	struct dc_stream_state *stream,
+	struct dc_state *context)
+{
+	enum dc_status result = DC_ERROR_UNEXPECTED;
+
+	context->streams[0] = stream;
+	dc_stream_retain(context->streams[0]);
+	context->stream_count++;
+
+	result = resource_map_pool_resources(dc, context, stream);
+
+	if (result == DC_OK)
+		result = resource_map_phy_clock_resources(dc, context, stream);
+
+	if (result == DC_OK)
+		result = build_mapped_resource(dc, context, stream);
+
+	if (result == DC_OK) {
+		validate_guaranteed_copy_streams(
+				context, dc->caps.max_streams);
+		result = resource_build_scaling_params_for_context(dc, context);
+	}
+
+	if (result == DC_OK)
+		if (!dce112_validate_bandwidth(dc, context))
+			result = DC_FAIL_BANDWIDTH_VALIDATE;
+
+	return result;
+}
+
+enum dc_status dce112_validate_global(
+	struct dc *dc,
+	struct dc_state *context)
+{
+	if (!dce112_validate_surface_sets(context))
+		return DC_FAIL_SURFACE_VALIDATE;
+
+	return DC_OK;
+}
+
+static void dce112_destroy_resource_pool(struct resource_pool **pool)
+{
+	struct dce110_resource_pool *dce110_pool = TO_DCE110_RES_POOL(*pool);
+
+	destruct(dce110_pool);
+	kfree(dce110_pool);
+	*pool = NULL;
+}
+
+static const struct resource_funcs dce112_res_pool_funcs = {
+	.destroy = dce112_destroy_resource_pool,
+	.link_enc_create = dce112_link_encoder_create,
+	.validate_guaranteed = dce112_validate_guaranteed,
+	.validate_bandwidth = dce112_validate_bandwidth,
+	.validate_plane = dce100_validate_plane,
+	.add_stream_to_ctx = dce112_add_stream_to_ctx,
+	.validate_global = dce112_validate_global
+};
+
+static void bw_calcs_data_update_from_pplib(struct dc *dc)
+{
+	struct dm_pp_clock_levels_with_latency eng_clks = {0};
+	struct dm_pp_clock_levels_with_latency mem_clks = {0};
+	struct dm_pp_wm_sets_with_clock_ranges clk_ranges = {0};
+	struct dm_pp_clock_levels clks = {0};
+
+	/* do system clock.  TODO PPLIB: once PPLIB is implemented,
+	 * remove the old way
+	 */
+	if (!dm_pp_get_clock_levels_by_type_with_latency(
+			dc->ctx,
+			DM_PP_CLOCK_TYPE_ENGINE_CLK,
+			&eng_clks)) {
+
+		/* This is only temporary */
+		dm_pp_get_clock_levels_by_type(
+				dc->ctx,
+				DM_PP_CLOCK_TYPE_ENGINE_CLK,
+				&clks);
+		/* convert all the clocks from kHz to fixed-point MHz */
+		dc->bw_vbios->high_sclk = bw_frc_to_fixed(
+				clks.clocks_in_khz[clks.num_levels-1], 1000);
+		dc->bw_vbios->mid1_sclk = bw_frc_to_fixed(
+				clks.clocks_in_khz[clks.num_levels/8], 1000);
+		dc->bw_vbios->mid2_sclk = bw_frc_to_fixed(
+				clks.clocks_in_khz[clks.num_levels*2/8], 1000);
+		dc->bw_vbios->mid3_sclk = bw_frc_to_fixed(
+				clks.clocks_in_khz[clks.num_levels*3/8], 1000);
+		dc->bw_vbios->mid4_sclk = bw_frc_to_fixed(
+				clks.clocks_in_khz[clks.num_levels*4/8], 1000);
+		dc->bw_vbios->mid5_sclk = bw_frc_to_fixed(
+				clks.clocks_in_khz[clks.num_levels*5/8], 1000);
+		dc->bw_vbios->mid6_sclk = bw_frc_to_fixed(
+				clks.clocks_in_khz[clks.num_levels*6/8], 1000);
+		dc->bw_vbios->low_sclk = bw_frc_to_fixed(
+				clks.clocks_in_khz[0], 1000);
+
+		/* do memory clock */
+		dm_pp_get_clock_levels_by_type(
+				dc->ctx,
+				DM_PP_CLOCK_TYPE_MEMORY_CLK,
+				&clks);
+
+		dc->bw_vbios->low_yclk = bw_frc_to_fixed(
+			clks.clocks_in_khz[0] * MEMORY_TYPE_MULTIPLIER, 1000);
+		dc->bw_vbios->mid_yclk = bw_frc_to_fixed(
+			clks.clocks_in_khz[clks.num_levels>>1] * MEMORY_TYPE_MULTIPLIER,
+			1000);
+		dc->bw_vbios->high_yclk = bw_frc_to_fixed(
+			clks.clocks_in_khz[clks.num_levels-1] * MEMORY_TYPE_MULTIPLIER,
+			1000);
+
+		return;
+	}
+
+	/* convert all the clocks from kHz to fixed-point MHz.  TODO: wloop data */
+	dc->bw_vbios->high_sclk = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels-1].clocks_in_khz, 1000);
+	dc->bw_vbios->mid1_sclk = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels/8].clocks_in_khz, 1000);
+	dc->bw_vbios->mid2_sclk = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*2/8].clocks_in_khz, 1000);
+	dc->bw_vbios->mid3_sclk = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz, 1000);
+	dc->bw_vbios->mid4_sclk = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*4/8].clocks_in_khz, 1000);
+	dc->bw_vbios->mid5_sclk = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*5/8].clocks_in_khz, 1000);
+	dc->bw_vbios->mid6_sclk = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*6/8].clocks_in_khz, 1000);
+	dc->bw_vbios->low_sclk = bw_frc_to_fixed(
+		eng_clks.data[0].clocks_in_khz, 1000);
+
+	/* do memory clock */
+	dm_pp_get_clock_levels_by_type_with_latency(
+			dc->ctx,
+			DM_PP_CLOCK_TYPE_MEMORY_CLK,
+			&mem_clks);
+
+	/* we don't need to call PPLIB for validation clock since they
+	 * also give us the highest sclk and highest mclk (UMA clock).
+	 * ALSO always convert UMA clock (from PPLIB) to YCLK (HW formula):
+	 * YCLK = UMACLK*m_memoryTypeMultiplier
+	 */
+	dc->bw_vbios->low_yclk = bw_frc_to_fixed(
+		mem_clks.data[0].clocks_in_khz * MEMORY_TYPE_MULTIPLIER, 1000);
+	dc->bw_vbios->mid_yclk = bw_frc_to_fixed(
+		mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz * MEMORY_TYPE_MULTIPLIER,
+		1000);
+	dc->bw_vbios->high_yclk = bw_frc_to_fixed(
+		mem_clks.data[mem_clks.num_levels-1].clocks_in_khz * MEMORY_TYPE_MULTIPLIER,
+		1000);
+
+	/* Now notify PPLib/SMU about which Watermarks sets they should select
+	 * depending on the DPM state they are in, and update the BW MGR GFX Engine
+	 * and Memory clock member variables used in the Watermarks calculations
+	 * for each Watermark Set.
+	 */
+	clk_ranges.num_wm_sets = 4;
+	clk_ranges.wm_clk_ranges[0].wm_set_id = WM_SET_A;
+	clk_ranges.wm_clk_ranges[0].wm_min_eng_clk_in_khz =
+			eng_clks.data[0].clocks_in_khz;
+	clk_ranges.wm_clk_ranges[0].wm_max_eng_clk_in_khz =
+			eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz - 1;
+	clk_ranges.wm_clk_ranges[0].wm_min_memg_clk_in_khz =
+			mem_clks.data[0].clocks_in_khz;
+	clk_ranges.wm_clk_ranges[0].wm_max_mem_clk_in_khz =
+			mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz - 1;
+
+	clk_ranges.wm_clk_ranges[1].wm_set_id = WM_SET_B;
+	clk_ranges.wm_clk_ranges[1].wm_min_eng_clk_in_khz =
+			eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz;
+	/* 5 GHz instead of data[7].clockInKHz to cover Overdrive */
+	clk_ranges.wm_clk_ranges[1].wm_max_eng_clk_in_khz = 5000000;
+	clk_ranges.wm_clk_ranges[1].wm_min_memg_clk_in_khz =
+			mem_clks.data[0].clocks_in_khz;
+	clk_ranges.wm_clk_ranges[1].wm_max_mem_clk_in_khz =
+			mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz - 1;
+
+	clk_ranges.wm_clk_ranges[2].wm_set_id = WM_SET_C;
+	clk_ranges.wm_clk_ranges[2].wm_min_eng_clk_in_khz =
+			eng_clks.data[0].clocks_in_khz;
+	clk_ranges.wm_clk_ranges[2].wm_max_eng_clk_in_khz =
+			eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz - 1;
+	clk_ranges.wm_clk_ranges[2].wm_min_memg_clk_in_khz =
+			mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz;
+	/* 5 GHz instead of data[2].clockInKHz to cover Overdrive */
+	clk_ranges.wm_clk_ranges[2].wm_max_mem_clk_in_khz = 5000000;
+
+	clk_ranges.wm_clk_ranges[3].wm_set_id = WM_SET_D;
+	clk_ranges.wm_clk_ranges[3].wm_min_eng_clk_in_khz =
+			eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz;
+	/* 5 GHz instead of data[7].clockInKHz to cover Overdrive */
+	clk_ranges.wm_clk_ranges[3].wm_max_eng_clk_in_khz = 5000000;
+	clk_ranges.wm_clk_ranges[3].wm_min_memg_clk_in_khz =
+			mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz;
+	/* 5 GHz instead of data[2].clockInKHz to cover Overdrive */
+	clk_ranges.wm_clk_ranges[3].wm_max_mem_clk_in_khz = 5000000;
+
+	/* Notify PP Lib/SMU which Watermarks to use for which clock ranges */
+	dm_pp_notify_wm_clock_changes(dc->ctx, &clk_ranges);
+}
+
+const struct resource_caps *dce112_resource_cap(
+	struct hw_asic_id *asic_id)
+{
+	if (ASIC_REV_IS_POLARIS11_M(asic_id->hw_internal_rev) ||
+	    ASIC_REV_IS_POLARIS12_V(asic_id->hw_internal_rev))
+		return &polaris_11_resource_cap;
+	else
+		return &polaris_10_resource_cap;
+}
+
+static bool construct(
+	uint8_t num_virtual_links,
+	struct dc *dc,
+	struct dce110_resource_pool *pool)
+{
+	unsigned int i;
+	struct dc_context *ctx = dc->ctx;
+	struct dm_pp_static_clock_info static_clk_info = {0};
+
+	ctx->dc_bios->regs = &bios_regs;
+
+	pool->base.res_cap = dce112_resource_cap(&ctx->asic_id);
+	pool->base.funcs = &dce112_res_pool_funcs;
+
+	/*************************************************
+	 *  Resource + asic cap hardcoding               *
+	 *************************************************/
+	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
+	pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
+	dc->caps.max_downscale_ratio = 200;
+	dc->caps.i2c_speed_in_khz = 100;
+	dc->caps.max_cursor_size = 128;
+
+	/*************************************************
+	 *  Create resources                             *
+	 *************************************************/
+
+	pool->base.clock_sources[DCE112_CLK_SRC_PLL0] =
+			dce112_clock_source_create(
+				ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL0,
+				&clk_src_regs[0], false);
+	pool->base.clock_sources[DCE112_CLK_SRC_PLL1] =
+			dce112_clock_source_create(
+				ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL1,
+				&clk_src_regs[1], false);
+	pool->base.clock_sources[DCE112_CLK_SRC_PLL2] =
+			dce112_clock_source_create(
+				ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL2,
+				&clk_src_regs[2], false);
+	pool->base.clock_sources[DCE112_CLK_SRC_PLL3] =
+			dce112_clock_source_create(
+				ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL3,
+				&clk_src_regs[3], false);
+	pool->base.clock_sources[DCE112_CLK_SRC_PLL4] =
+			dce112_clock_source_create(
+				ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL4,
+				&clk_src_regs[4], false);
+	pool->base.clock_sources[DCE112_CLK_SRC_PLL5] =
+			dce112_clock_source_create(
+				ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL5,
+				&clk_src_regs[5], false);
+	pool->base.clk_src_count = DCE112_CLK_SRC_TOTAL;
+
+	pool->base.dp_clock_source = dce112_clock_source_create(
+			ctx, ctx->dc_bios,
+			CLOCK_SOURCE_ID_DP_DTO, &clk_src_regs[0], true);
+
+	for (i = 0; i < pool->base.clk_src_count; i++) {
+		if (pool->base.clock_sources[i] == NULL) {
+			dm_error("DC: failed to create clock sources!\n");
+			BREAK_TO_DEBUGGER();
+			goto res_create_fail;
+		}
+	}
+
+	pool->base.display_clock = dce112_disp_clk_create(ctx,
+			&disp_clk_regs,
+			&disp_clk_shift,
+			&disp_clk_mask);
+	if (pool->base.display_clock == NULL) {
+		dm_error("DC: failed to create display clock!\n");
+		BREAK_TO_DEBUGGER();
+		goto res_create_fail;
+	}
+
+	pool->base.dmcu = dce_dmcu_create(ctx,
+			&dmcu_regs,
+			&dmcu_shift,
+			&dmcu_mask);
+	if (pool->base.dmcu == NULL) {
+		dm_error("DC: failed to create dmcu!\n");
+		BREAK_TO_DEBUGGER();
+		goto res_create_fail;
+	}
+
+	pool->base.abm = dce_abm_create(ctx,
+			&abm_regs,
+			&abm_shift,
+			&abm_mask);
+	if (pool->base.abm == NULL) {
+		dm_error("DC: failed to create abm!\n");
+		BREAK_TO_DEBUGGER();
+		goto res_create_fail;
+	}
+
+	/* get static clock information for PPLIB or firmware, save
+	 * max_clock_state
+	 */
+	if (dm_pp_get_static_clocks(ctx, &static_clk_info))
+		pool->base.display_clock->max_clks_state =
+				static_clk_info.max_clocks_state;
+
+	{
+		struct irq_service_init_data init_data;
+		init_data.ctx = dc->ctx;
+		pool->base.irqs = dal_irq_service_dce110_create(&init_data);
+		if (!pool->base.irqs)
+			goto res_create_fail;
+	}
+
+	for (i = 0; i < pool->base.pipe_count; i++) {
+		pool->base.timing_generators[i] =
+				dce112_timing_generator_create(
+					ctx,
+					i,
+					&dce112_tg_offsets[i]);
+		if (pool->base.timing_generators[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error("DC: failed to create tg!\n");
+			goto res_create_fail;
+		}
+
+		pool->base.mis[i] = dce112_mem_input_create(ctx, i);
+		if (pool->base.mis[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create memory input!\n");
+			goto res_create_fail;
+		}
+
+		pool->base.ipps[i] = dce112_ipp_create(ctx, i);
+		if (pool->base.ipps[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create input pixel processor!\n");
+			goto res_create_fail;
+		}
+
+		pool->base.transforms[i] = dce112_transform_create(ctx, i);
+		if (pool->base.transforms[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create transform!\n");
+			goto res_create_fail;
+		}
+
+		pool->base.opps[i] = dce112_opp_create(
+				ctx,
+				i);
+		if (pool->base.opps[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create output pixel processor!\n");
+			goto res_create_fail;
+		}
+	}
+
+	if (!resource_construct(num_virtual_links, dc, &pool->base,
+			&res_create_funcs))
+		goto res_create_fail;
+
+	dc->caps.max_planes = pool->base.pipe_count;
+
+	/* Create hardware sequencer */
+	dce112_hw_sequencer_construct(dc);
+
+	bw_calcs_init(dc->bw_dceip, dc->bw_vbios, dc->ctx->asic_id);
+
+	bw_calcs_data_update_from_pplib(dc);
+
+	return true;
+
+res_create_fail:
+	destruct(pool);
+	return false;
+}
+
+struct resource_pool *dce112_create_resource_pool(
+	uint8_t num_virtual_links,
+	struct dc *dc)
+{
+	struct dce110_resource_pool *pool =
+		kzalloc(sizeof(struct dce110_resource_pool), GFP_KERNEL);
+
+	if (!pool)
+		return NULL;
+
+	if (construct(num_virtual_links, dc, pool))
+		return &pool->base;
+
+	BREAK_TO_DEBUGGER();
+	return NULL;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.h b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.h
new file mode 100644
index 000000000000..d5c19d34eb0a
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_RESOURCE_DCE112_H__
+#define __DC_RESOURCE_DCE112_H__
+
+#include "core_types.h"
+
+struct dc;
+struct resource_pool;
+
+struct resource_pool *dce112_create_resource_pool(
+	uint8_t num_virtual_links,
+	struct dc *dc);
+
+enum dc_status dce112_validate_with_context(
+	struct dc *dc,
+	const struct dc_validation_set set[],
+	int set_count,
+	struct dc_state *context,
+	struct dc_state *old_context);
+
+enum dc_status dce112_validate_guaranteed(
+	struct dc *dc,
+	struct dc_stream_state *dc_stream,
+	struct dc_state *context);
+
+bool dce112_validate_bandwidth(
+	struct dc *dc,
+	struct dc_state *context);
+
+enum dc_status dce112_add_stream_to_ctx(
+	struct dc *dc,
+	struct dc_state *new_ctx,
+	struct dc_stream_state *dc_stream);
+
+#endif /* __DC_RESOURCE_DCE112_H__ */