Find Valid Matrix Given Row and Column Sums

You are given two arrays rowSum and colSum of non-negative integers where rowSum[i] is the sum of the elements in the ith row and colSum[j] is the sum of the elements of the jth column of a 2D matrix. In other words, you do not know the elements of the matrix, but you do know the sums of each row and column.

Find any matrix of non-negative integers of size rowSum.length x colSum.length that satisfies the rowSum and colSum requirements.

Return a 2D array representing any matrix that fulfills the requirements. It's guaranteed that at least one matrix that fulfills the requirements exists.

 

Example 1:

Input: rowSum = [3,8], colSum = [4,7]
Output: [[3,0],
         [1,7]]
Explanation: 
0th row: 3 + 0 = 3 == rowSum[0]
1st row: 1 + 7 = 8 == rowSum[1]
0th column: 3 + 1 = 4 == colSum[0]
1st column: 0 + 7 = 7 == colSum[1]
The row and column sums match, and all matrix elements are non-negative.
Another possible matrix is: [[1,2],
                             [3,5]]

Example 2:

Input: rowSum = [5,7,10], colSum = [8,6,8]
Output: [[0,5,0],
         [6,1,0],
         [2,0,8]]

 

Constraints:

  • 1 <= rowSum.length, colSum.length <= 500
  • 0 <= rowSum[i], colSum[i] <= 10^8
  • sum(rowSum) == sum(colSum)

Solution Explanation: 1605. Find Valid Matrix Given Row and Column Sums

This problem asks us to reconstruct a matrix given only the sum of each row and each column. The solution is a greedy, cell-by-cell construction.

Approach:

The core idea is iterative:

  1. Initialization: Create a matrix ans of the appropriate dimensions (rows = length of rowSum, columns = length of colSum). Initially, fill it with zeros.

  2. Greedy Filling: Iterate through each cell (i, j) of the matrix. For each cell:

    • Determine the maximum value x that can be placed in the cell without exceeding either the remaining row sum (rowSum[i]) or the remaining column sum (colSum[j]). This is done using x = min(rowSum[i], colSum[j]).
    • Place x in the cell ans[i][j].
    • Update the remaining row sum and column sum: rowSum[i] -= x and colSum[j] -= x.
  3. Termination: Once every cell has been visited, all remaining row and column sums are zero and ans satisfies the requirements (see the worked trace below).
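
To make the greedy step concrete, here is the trace on Example 1, where rowSum = [3,8] and colSum = [4,7]:

  • Cell (0, 0): x = min(3, 4) = 3 → rowSum = [0, 8], colSum = [1, 7]
  • Cell (0, 1): x = min(0, 7) = 0 → sums unchanged
  • Cell (1, 0): x = min(8, 1) = 1 → rowSum = [0, 7], colSum = [0, 7]
  • Cell (1, 1): x = min(7, 7) = 7 → rowSum = [0, 0], colSum = [0, 0]

All remaining sums reach zero, and the constructed matrix [[3,0],[1,7]] is exactly the output shown in Example 1.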

Why this works (Correctness):

The algorithm relies on the guarantee that sum(rowSum) == sum(colSum). Two observations establish correctness. First, every placement subtracts the same amount x from one row residual and one column residual, so the total remaining row sum always equals the total remaining column sum. Second, since x = min(rowSum[i], colSum[j]), each placement drives at least one of the two residuals to zero. Now suppose some rowSum[i] were still positive after row i is fully processed. Then at every cell (i, j) the column residual must have been the smaller value and was driven to zero, so all column residuals would be zero while the row residuals still sum to something positive, contradicting the first observation. Hence every row residual (and, by the same argument, every column residual) ends at zero. All entries are non-negative because x is the minimum of two non-negative values.
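
As a sanity check, the construction can be verified with a few lines of Python. The helper below (is_valid is our own name for illustration, not part of the problem or any library) confirms the row sums, the column sums, and non-negativity:

from typing import List

def is_valid(matrix: List[List[int]], rowSum: List[int], colSum: List[int]) -> bool:
    # Each row of the matrix must add up to the corresponding rowSum entry.
    if any(sum(row) != s for row, s in zip(matrix, rowSum)):
        return False
    # Each column (obtained by transposing with zip) must match colSum.
    if any(sum(col) != s for col, s in zip(zip(*matrix), colSum)):
        return False
    # Every entry must be non-negative.
    return all(x >= 0 for row in matrix for x in row)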

Time and Space Complexity:

  • Time Complexity: O(m * n), where 'm' is the number of rows (length of rowSum) and 'n' is the number of columns (length of colSum). This is because we iterate through each cell of the m x n matrix once.

  • Space Complexity: O(m * n) to store the resulting matrix ans. Beyond the output, only O(1) extra space is used, since the remaining sums are tracked in the input arrays themselves.

Code in Different Languages:

The implementations below all follow the algorithm described above; only the syntax differs. Note that each of them consumes the input arrays in place (rowSum and colSum hold the remaining sums and end up all zeros), so pass copies if the originals are still needed.

Python:

from typing import List

class Solution:
    def restoreMatrix(self, rowSum: List[int], colSum: List[int]) -> List[List[int]]:
        m, n = len(rowSum), len(colSum)
        ans = [[0] * n for _ in range(m)]
        for i in range(m):
            for j in range(n):
                # Greedily place the largest value both remaining sums allow.
                x = min(rowSum[i], colSum[j])
                ans[i][j] = x
                rowSum[i] -= x
                colSum[j] -= x
        return ans
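
A quick usage check against Example 1 (we pass copies, since restoreMatrix mutates its arguments; is_valid is the sanity-check helper defined above):

rowSum, colSum = [3, 8], [4, 7]
ans = Solution().restoreMatrix(rowSum[:], colSum[:])
print(ans)  # [[3, 0], [1, 7]]
assert is_valid(ans, rowSum, colSum)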
 

Java:

class Solution {
    public int[][] restoreMatrix(int[] rowSum, int[] colSum) {
        int m = rowSum.length;
        int n = colSum.length;
        int[][] ans = new int[m][n];
        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < n; ++j) {
                // Greedily take as much as both remaining sums allow.
                int x = Math.min(rowSum[i], colSum[j]);
                ans[i][j] = x;
                rowSum[i] -= x;
                colSum[j] -= x;
            }
        }
        return ans;
    }
}

C++:

#include <algorithm>
#include <vector>
using namespace std;

class Solution {
public:
    vector<vector<int>> restoreMatrix(vector<int>& rowSum, vector<int>& colSum) {
        int m = rowSum.size(), n = colSum.size();
        vector<vector<int>> ans(m, vector<int>(n));
        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < n; ++j) {
                // Greedily take as much as both remaining sums allow.
                int x = min(rowSum[i], colSum[j]);
                ans[i][j] = x;
                rowSum[i] -= x;
                colSum[j] -= x;
            }
        }
        return ans;
    }
};

JavaScript:

var restoreMatrix = function(rowSum, colSum) {
    const m = rowSum.length;
    const n = colSum.length;
    const ans = Array.from(Array(m), () => new Array(n).fill(0));
    for (let i = 0; i < m; i++) {
        for (let j = 0; j < n; j++) {
            // Greedily take as much as both remaining sums allow.
            const x = Math.min(rowSum[i], colSum[j]);
            ans[i][j] = x;
            rowSum[i] -= x;
            colSum[j] -= x;
        }
    }
    return ans;
};

(Go, TypeScript, etc. would follow the same pattern.) The key in every language is the pair of nested loops over the matrix and the greedy min operation that determines each cell's value.