You are given two integer arrays nums1 and nums2 of length n.

The XOR sum of the two integer arrays is (nums1[0] XOR nums2[0]) + (nums1[1] XOR nums2[1]) + ... + (nums1[n - 1] XOR nums2[n - 1]) (0-indexed).

For example, the XOR sum of [1,2,3] and [3,2,1] is equal to (1 XOR 3) + (2 XOR 2) + (3 XOR 1) = 2 + 0 + 2 = 4.

Rearrange the elements of nums2 such that the resulting XOR sum is minimized.

Return the XOR sum after the rearrangement.
Example 1:
Input: nums1 = [1,2], nums2 = [2,3]
Output: 2
Explanation: Rearrange nums2 so that it becomes [3,2].
The XOR sum is (1 XOR 3) + (2 XOR 2) = 2 + 0 = 2.
Example 2:
Input: nums1 = [1,0,3], nums2 = [5,3,4]
Output: 8
Explanation: Rearrange nums2 so that it becomes [5,4,3].
The XOR sum is (1 XOR 5) + (0 XOR 4) + (3 XOR 3) = 4 + 4 + 0 = 8.
Constraints:
n == nums1.length
n == nums2.length
1 <= n <= 14
0 <= nums1[i], nums2[i] <= 10^7
The problem asks for the minimum XOR sum of two arrays nums1 and nums2 after rearranging the elements of nums2. The XOR sum is defined as the sum of element-wise XOR operations between the two arrays. The length of both arrays is n, where 1 <= n <= 14.

A brute force over all n! permutations of nums2 is too slow in the worst case (14! is roughly 8.7 × 10^10 orderings), but the small bound on n makes a solution over the 2^n <= 16384 subsets of nums2 feasible, using dynamic programming and bit manipulation.
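For contrast, the permutation brute force can be sketched as below (a minimal sketch; `min_xor_sum_bruteforce` is an illustrative name, and the approach is only practical for roughly n <= 10):

```python
from itertools import permutations

def min_xor_sum_bruteforce(nums1, nums2):
    # Try every ordering of nums2 and keep the smallest XOR sum: O(n! * n) time.
    return min(
        sum(a ^ b for a, b in zip(nums1, perm))
        for perm in permutations(nums2)
    )

print(min_xor_sum_bruteforce([1, 2], [2, 3]))  # 2 (Example 1)
```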
The core idea is to represent the subsets of nums2 using bitmasks. A bitmask of length n can represent all possible subsets: a 1 at position i indicates that nums2[i] is included in the subset. The dynamic programming approach then iterates through the elements of nums1 and builds up the minimum XOR sum for all possible subsets of nums2.
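As a quick illustration of this encoding (the values here are illustrative, borrowed from Example 2):

```python
nums2 = [5, 3, 4]
mask = 0b101  # bits 0 and 2 set: subset {nums2[0], nums2[2]}

# Decode the subset: bit k set means nums2[k] is included.
subset = [nums2[k] for k in range(len(nums2)) if mask >> k & 1]
print(subset)                # [5, 4]
print(bin(mask).count("1"))  # 2 -- number of elements in the subset
```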
State: dp[i][mask] represents the minimum XOR sum achievable by pairing the first i elements of nums1 with the subset of nums2 encoded by the bitmask mask.

Transition: To fill dp[i][mask], consider each set bit k of mask: pairing nums1[i - 1] with nums2[k] leaves the first i - 1 elements of nums1 paired with the subset mask ^ (1 << k), so dp[i][mask] = min over set bits k of dp[i - 1][mask ^ (1 << k)] + (nums1[i - 1] XOR nums2[k]). Only masks with exactly i set bits yield finite values.

Base Case: dp[0][0] = 0 (no elements from nums1 or nums2 used).

Final Result: dp[n][(1 << n) - 1] (using all elements from both arrays).
from math import inf
from typing import List

class Solution:
    def minimumXORSum(self, nums1: List[int], nums2: List[int]) -> int:
        n = len(nums2)
        # f[i][j]: min XOR sum pairing first i elements of nums1 with subset j of nums2
        f = [[inf] * (1 << n) for _ in range(n + 1)]
        f[0][0] = 0  # base case: nothing paired yet
        for i, x in enumerate(nums1, 1):  # iterate through nums1 (1-indexed row)
            for j in range(1 << n):  # iterate through all subsets of nums2
                for k in range(n):  # try pairing x with each nums2[k]
                    if j >> k & 1:  # nums2[k] is in subset j
                        f[i][j] = min(f[i][j], f[i - 1][j ^ (1 << k)] + (x ^ nums2[k]))
        return f[-1][-1]  # all of nums1 paired with the full set of nums2
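Repeating the class so the snippet runs standalone, the two examples from the problem statement can be checked as follows:

```python
from math import inf
from typing import List

class Solution:
    def minimumXORSum(self, nums1: List[int], nums2: List[int]) -> int:
        n = len(nums2)
        f = [[inf] * (1 << n) for _ in range(n + 1)]
        f[0][0] = 0
        for i, x in enumerate(nums1, 1):
            for j in range(1 << n):
                for k in range(n):
                    if j >> k & 1:
                        f[i][j] = min(f[i][j], f[i - 1][j ^ (1 << k)] + (x ^ nums2[k]))
        return f[-1][-1]

s = Solution()
print(s.minimumXORSum([1, 2], [2, 3]))        # 2 (Example 1)
print(s.minimumXORSum([1, 0, 3], [5, 3, 4]))  # 8 (Example 2)
```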
Time Complexity: O(n · 2^n · n) = O(n^2 · 2^n). The outer loops iterate through nums1 and all subsets of nums2, while the inner loop checks each element of nums2.
Space Complexity: O(n · 2^n) to store the DP table.
This solution optimizes space by using a 1D DP array. The subset loop runs backward so that, when f[j] is updated, the smaller masks f[j ^ (1 << k)] still hold values from the previous pass over nums1.
from math import inf
from typing import List

class Solution:
    def minimumXORSum(self, nums1: List[int], nums2: List[int]) -> int:
        n = len(nums2)
        f = [inf] * (1 << n)
        f[0] = 0
        for x in nums1:
            # Backward so f[j ^ (1 << k)] still holds the previous row's value.
            for j in range((1 << n) - 1, -1, -1):
                for k in range(n):
                    if j >> k & 1:
                        f[j] = min(f[j], f[j ^ (1 << k)] + (x ^ nums2[k]))
        return f[-1]
Time Complexity: Same as Solution 1: O(n^2 · 2^n).
Space Complexity: O(2^n), significantly reduced from the 2D approach.
This version removes the explicit loop over nums1 entirely: a mask with k set bits must pair exactly the first k elements of nums1, so each mask is processed once, pairing nums1[bit_count(mask) - 1] with every candidate nums2[j].
from math import inf
from typing import List

class Solution:
    def minimumXORSum(self, nums1: List[int], nums2: List[int]) -> int:
        n = len(nums2)
        f = [inf] * (1 << n)
        f[0] = 0
        for i in range(1, 1 << n):
            # Index into nums1; int.bit_count requires Python 3.10+
            # (use bin(i).count("1") on older versions).
            k = i.bit_count() - 1
            for j in range(n):
                if i >> j & 1:
                    f[i] = min(f[i], f[i ^ (1 << j)] + (nums1[k] ^ nums2[j]))
        return f[-1]
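As a sanity check (a sketch; function names are illustrative), the refined DP can be cross-validated against a permutation brute force on small random inputs. `bin(i).count("1")` stands in for `bit_count` so the check also runs on Python versions before 3.10:

```python
import random
from itertools import permutations
from math import inf

def min_xor_sum_dp(nums1, nums2):
    # Refined DP: a mask with k set bits pairs the first k elements of nums1.
    n = len(nums2)
    f = [inf] * (1 << n)
    f[0] = 0
    for i in range(1, 1 << n):
        k = bin(i).count("1") - 1
        for j in range(n):
            if i >> j & 1:
                f[i] = min(f[i], f[i ^ (1 << j)] + (nums1[k] ^ nums2[j]))
    return f[-1]

def min_xor_sum_bf(nums1, nums2):
    return min(sum(a ^ b for a, b in zip(nums1, p)) for p in permutations(nums2))

random.seed(0)
for _ in range(50):
    n = random.randint(1, 6)
    a = [random.randint(0, 100) for _ in range(n)]
    b = [random.randint(0, 100) for _ in range(n)]
    assert min_xor_sum_dp(a, b) == min_xor_sum_bf(a, b)
print("all checks passed")
```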
Time Complexity: O(n · 2^n). Dropping the explicit loop over nums1 removes a factor of n compared with the previous versions.
Space Complexity: O(2^n)
The provided solutions in Java, C++, Go, and TypeScript follow the same logic and complexity analysis as the Python solutions. They differ only in syntax and specific library functions used for bit manipulation and DP table initialization.