There are n piles of stones arranged in a row. The ith pile has stones[i] stones.
A move consists of merging exactly k consecutive piles into one pile, and the cost of this move is equal to the total number of stones in these k piles.
Return the minimum cost to merge all piles of stones into one pile. If it is impossible, return -1.
Example 1:
Input: stones = [3,2,4,1], k = 2
Output: 20
Explanation: We start with [3, 2, 4, 1].
We merge [3, 2] for a cost of 5, and we are left with [5, 4, 1].
We merge [4, 1] for a cost of 5, and we are left with [5, 5].
We merge [5, 5] for a cost of 10, and we are left with [10].
The total cost was 20, and this is the minimum possible.
Example 2:
Input: stones = [3,2,4,1], k = 3
Output: -1
Explanation: After any merge operation, there are 2 piles left, and we can't merge anymore. So the task is impossible.
Example 3:
Input: stones = [3,5,1,2,6], k = 3
Output: 25
Explanation: We start with [3, 5, 1, 2, 6].
We merge [5, 1, 2] for a cost of 8, and we are left with [3, 8, 6].
We merge [3, 8, 6] for a cost of 17, and we are left with [17].
The total cost was 25, and this is the minimum possible.
Constraints:
n == stones.length
1 <= n <= 30
1 <= stones[i] <= 100
2 <= k <= 30
This problem asks for the minimum cost to merge n piles of stones into one pile, where each merge operation combines exactly k consecutive piles and costs the sum of the stones in those piles. The solution uses dynamic programming to compute this efficiently.
Intuition:
The core idea is to build a solution bottom-up. We start by considering the minimum cost to merge small groups of piles, and use those results to calculate the cost of merging larger groups. This is a classic overlapping subproblems scenario perfectly suited for dynamic programming.
Approach:
Base Cases: A single pile requires no merging, so its cost is 0: f[i][i][1] = 0 for all i.
Recursive Relation: The DP state f[i][j][k] represents the minimum cost to merge piles i through j into exactly k piles (here k counts the resulting piles and is distinct from the merge size, written K). To compute f[i][j][k] for k >= 2, we consider every split point h with i <= h < j: merge the left group (piles i to h) into 1 pile at cost f[i][h][1], and the right group (piles h+1 to j) into k-1 piles at cost f[h+1][j][k-1]. The candidate cost is the sum of these two values, and we take the minimum over all split points h. Finally, once a range holds exactly K piles, they can be merged into one: f[i][j][1] = f[i][j][K] plus the total number of stones in piles i through j.
Final Result: The answer is f[1][n][1], the minimum cost to merge all n piles into a single pile.
Prefix Sum: To compute the cost of merging a range of piles efficiently, we use prefix sums (s). s[i] stores the total number of stones in piles 1 through i, so the sum of stones in any range i..j is s[j] - s[i-1], an O(1) lookup (see the short snippet after this list).
Impossible Case: Each merge replaces K piles with 1, so the number of piles shrinks by K - 1 per move. If (n - 1) is not divisible by (K - 1), it is therefore impossible to end up with exactly one pile, and we return -1.
Time Complexity Analysis:
The DP table has dimensions (n+1) x (n+1) x (K+1). The nested loops iterate through these dimensions, resulting in a time complexity of O(n³K). The prefix sum calculation is O(n), which is dominated by the DP computation.
Space Complexity Analysis:
The space complexity is O(n²K) due to the size of the DP table f.
Code Explanation (Python):
The Python code implements the dynamic programming approach. accumulate from the itertools module is used to compute the prefix sums efficiently. The code iterates over range lengths (l), start indices (i), and target pile counts (k), applying the recursive relation; inf is used as the initial value so that the minimum is chosen correctly.
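A minimal, self-contained Python sketch of this approach follows (the standalone function name mergeStones and the exact loop organization are assumptions, not the original listing):

```python
from itertools import accumulate
from math import inf


def mergeStones(stones, K):
    n = len(stones)
    # Impossible case: each merge removes K - 1 piles.
    if (n - 1) % (K - 1) != 0:
        return -1
    # s[i] = total stones in piles 1..i (1-indexed prefix sums).
    s = [0] + list(accumulate(stones))
    # f[i][j][k] = minimum cost to merge piles i..j into exactly k piles.
    f = [[[inf] * (K + 1) for _ in range(n + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        f[i][i][1] = 0                      # a single pile costs nothing
    for l in range(2, n + 1):               # length of the range
        for i in range(1, n - l + 2):       # start of the range
            j = i + l - 1                   # end of the range
            for k in range(2, K + 1):       # target number of piles
                for h in range(i, j):       # split point
                    f[i][j][k] = min(f[i][j][k],
                                     f[i][h][1] + f[h + 1][j][k - 1])
            # Once the range holds exactly K piles, merge them into one,
            # paying the total number of stones in the range.
            f[i][j][1] = min(f[i][j][1], f[i][j][K] + s[j] - s[i - 1])
    return f[1][n][1]


print(mergeStones([3, 2, 4, 1], 2))    # 20 (Example 1)
print(mergeStones([3, 5, 1, 2, 6], 3)) # 25 (Example 3)
```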
The Java, C++, and Go code implementations follow the same logic and algorithmic structure, but with syntax specific to each respective programming language. All implementations have the same asymptotic time and space complexities.