
Single-Threaded CPU

You are given n tasks labeled from 0 to n - 1, represented by a 2D integer array tasks, where tasks[i] = [enqueueTime_i, processingTime_i] means that the i-th task will be available to process at enqueueTime_i and will take processingTime_i to finish processing.

You have a single-threaded CPU that can process at most one task at a time and will act in the following way:

  • If the CPU is idle and there are no available tasks to process, the CPU remains idle.
  • If the CPU is idle and there are available tasks, the CPU will choose the one with the shortest processing time. If multiple tasks have the same shortest processing time, it will choose the task with the smallest index.
  • Once a task is started, the CPU will process the entire task without stopping.
  • The CPU can finish a task then start a new one instantly.

Return the order in which the CPU will process the tasks.

 

Example 1:

Input: tasks = [[1,2],[2,4],[3,2],[4,1]]
Output: [0,2,3,1]
Explanation: The events go as follows: 
- At time = 1, task 0 is available to process. Available tasks = {0}.
- Also at time = 1, the idle CPU starts processing task 0. Available tasks = {}.
- At time = 2, task 1 is available to process. Available tasks = {1}.
- At time = 3, task 2 is available to process. Available tasks = {1, 2}.
- Also at time = 3, the CPU finishes task 0 and starts processing task 2 as it is the shortest. Available tasks = {1}.
- At time = 4, task 3 is available to process. Available tasks = {1, 3}.
- At time = 5, the CPU finishes task 2 and starts processing task 3 as it is the shortest. Available tasks = {1}.
- At time = 6, the CPU finishes task 3 and starts processing task 1. Available tasks = {}.
- At time = 10, the CPU finishes task 1 and becomes idle.

Example 2:

Input: tasks = [[7,10],[7,12],[7,5],[7,4],[7,2]]
Output: [4,3,2,0,1]
Explanation: The events go as follows:
- At time = 7, all the tasks become available. Available tasks = {0,1,2,3,4}.
- Also at time = 7, the idle CPU starts processing task 4. Available tasks = {0,1,2,3}.
- At time = 9, the CPU finishes task 4 and starts processing task 3. Available tasks = {0,1,2}.
- At time = 13, the CPU finishes task 3 and starts processing task 2. Available tasks = {0,1}.
- At time = 18, the CPU finishes task 2 and starts processing task 0. Available tasks = {1}.
- At time = 28, the CPU finishes task 0 and starts processing task 1. Available tasks = {}.
- At time = 40, the CPU finishes task 1 and becomes idle.

 

Constraints:

  • tasks.length == n
  • 1 <= n <= 10^5
  • 1 <= enqueueTime_i, processingTime_i <= 10^9

Solution Explanation: Single-Threaded CPU

This problem requires scheduling tasks on a single-threaded CPU based on specific rules: prioritize shorter processing times, and if ties exist, use the smaller task index. The solution employs a combination of sorting and a priority queue (min-heap) for efficient task management.

Approach:

  1. Data Augmentation: Each task [enqueueTime, processingTime] is augmented with its original index. This is crucial for breaking ties between tasks with equal processing times. The augmented task representation becomes [enqueueTime, processingTime, index].

  2. Sorting: The tasks are sorted by enqueueTime in ascending order. This ensures that we process tasks in the order they become available.

  3. Priority Queue (Min-Heap): A priority queue is used to store available tasks. The priority is determined by processingTime, and if ties exist, by the task's index. This guarantees that the CPU always selects the task with the shortest processing time and the smallest index in case of ties.

  4. Simulation: The algorithm simulates the CPU's execution. A currentTime variable tracks the current time. The main loop continues as long as there are tasks in the priority queue or available tasks to add to it.

    • Idle CPU: If the priority queue is empty, currentTime jumps forward to the enqueueTime of the next unprocessed task (or stays unchanged if currentTime is already past it). This fast-forwards over idle gaps instead of simulating them tick by tick.

    • Add Available Tasks: All tasks whose enqueueTime is less than or equal to the currentTime are added to the priority queue.

    • Process Task: The task with the shortest processingTime (and smallest index in case of ties) is dequeued from the priority queue. Its index is appended to the result list, and currentTime is updated by adding the task's processingTime.

  5. Result: The ans list, which accumulates task indices in the order of execution, is returned.
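A detail worth calling out from step 3: Python compares tuples lexicographically, so pushing (processingTime, index) pairs onto the heap implements both priority rules at once, with no custom comparator needed. A minimal sketch with hypothetical task values:

```python
import heapq

# Tuples compare element by element, so a heap of (processingTime, index)
# pairs pops the shortest task first and breaks ties by the smaller index.
pq = []
heapq.heappush(pq, (4, 1))  # task 1: processing time 4
heapq.heappush(pq, (2, 2))  # task 2: processing time 2
heapq.heappush(pq, (2, 0))  # task 0: processing time 2 (ties with task 2)

order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
print(order)  # [0, 2, 1] — shortest first, then smallest index on ties
```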

Time Complexity Analysis:

  • Sorting the tasks: O(n log n)
  • Adding and removing tasks from the priority queue: O(n log n) in the worst case. Each task is added and removed once. The heap operations (insertion and deletion) take O(log n) time.
  • Iterating through the tasks: O(n)

Therefore, the overall time complexity is dominated by the sorting and priority queue operations, resulting in O(n log n).

Space Complexity Analysis:

  • The priority queue stores at most n tasks at a time.
  • The space used for the sorted tasks and the result list is O(n).

Thus, the overall space complexity is O(n).

Code Examples (Python):

from typing import List
import heapq
 
class Solution:
    def getOrder(self, tasks: List[List[int]]) -> List[int]:
        n = len(tasks)
        for i in range(n):
            tasks[i].append(i) # add original index
 
        tasks.sort() # Sort by enqueueTime (ties broken by processingTime, then index)
 
        ans = []
        pq = []  # Min-heap (processingTime, index)
        curr_time = 0
        task_index = 0
 
        while task_index < n or pq:
            if not pq:
                curr_time = max(curr_time, tasks[task_index][0])
 
            while task_index < n and tasks[task_index][0] <= curr_time:
                heapq.heappush(pq, (tasks[task_index][1], tasks[task_index][2]))
                task_index += 1
 
            proc_time, task_idx = heapq.heappop(pq)
            ans.append(task_idx)
            curr_time += proc_time
 
        return ans

The code in other languages (Java, C++, Go) follows the same logic with appropriate data structures and syntax adjustments. The core algorithm remains consistent across all languages.
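As a quick sanity check, the same logic can be wrapped in a standalone function (restated here so the snippet runs on its own, without the class) and run against the two examples from the problem statement:

```python
import heapq
from typing import List

def get_order(tasks: List[List[int]]) -> List[int]:
    # Same algorithm as Solution.getOrder above, as a standalone function.
    indexed = sorted([t[0], t[1], i] for i, t in enumerate(tasks))
    ans, pq = [], []
    curr_time, task_index, n = 0, 0, len(indexed)
    while task_index < n or pq:
        if not pq:
            # Fast-forward over idle time to the next task's enqueueTime.
            curr_time = max(curr_time, indexed[task_index][0])
        while task_index < n and indexed[task_index][0] <= curr_time:
            heapq.heappush(pq, (indexed[task_index][1], indexed[task_index][2]))
            task_index += 1
        proc_time, task_idx = heapq.heappop(pq)
        ans.append(task_idx)
        curr_time += proc_time
    return ans

print(get_order([[1, 2], [2, 4], [3, 2], [4, 1]]))            # [0, 2, 3, 1]
print(get_order([[7, 10], [7, 12], [7, 5], [7, 4], [7, 2]]))  # [4, 3, 2, 0, 1]
```

Both calls reproduce the expected outputs from Examples 1 and 2.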