
Encode Number

Solution Explanation: Encode Number

This problem asks us to encode a non-negative integer using a specific encoding scheme. Observing the table provided, we can deduce the pattern: the encoding is essentially the binary representation of (num + 1), but with the leading '1' removed.
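
To see the pattern concretely, here is a minimal Python sketch (the loop bound is arbitrary, chosen only to reproduce the first few rows of the table):

for num in range(7):
    binary = bin(num + 1)[2:]   # binary representation of num + 1, e.g. bin(4) -> "100"
    encoding = binary[1:]       # drop the leading '1'
    print(num, "->", repr(encoding))
# 0 -> '', 1 -> '0', 2 -> '1', 3 -> '00', 4 -> '01', 5 -> '10', 6 -> '11'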

Approach: Bit Manipulation

The most efficient way to solve this is using bit manipulation. We leverage the built-in functions in each programming language to convert integers to their binary representations.

Steps:

  1. Increment: Add 1 to the input number num. This accounts for the implicit leading '1' in the encoding scheme.

  2. Binary Conversion: Convert the incremented number to its binary representation (a string of '0's and '1's). Most languages provide a built-in for this (e.g., bin() in Python, Integer.toBinaryString() in Java, Number.prototype.toString(2) in TypeScript, or std::bitset's to_string() in C++).

  3. Remove Leading '1': The resulting binary string will always start with '1' (it could only fail to do so if num were -1, which is outside the constraints). Remove this leading character using string slicing or a substring operation; a short trace of all three steps follows this list.
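
For example, with num = 6 the three steps play out as follows (a small Python trace; the intermediate variable names are only for illustration):

num = 6
incremented = num + 1            # step 1: 7
binary = bin(incremented)[2:]    # step 2: "111" (strip Python's "0b" prefix)
encoding = binary[1:]            # step 3: drop the leading '1' -> "11"
print(encoding)                  # prints "11"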

Time Complexity: O(log n)

The dominant operation is the binary conversion, which takes time proportional to the number of bits in the binary representation of num + 1. That is approximately log₂(num + 1) bits, making the time complexity logarithmic. Removing the leading '1' (slicing or substring) copies at most that many characters, so it does not change the overall bound.

Space Complexity: O(log n)

The space used is primarily for storing the binary string, which has a length proportional to log₂(num + 1).

Code Implementation (with explanations)

Python:

class Solution:
    def encode(self, num: int) -> str:
        return bin(num + 1)[3:] # bin() converts to binary string; [3:] slices to remove "0b1"

Java:

class Solution {
    public String encode(int num) {
        return Integer.toBinaryString(num + 1).substring(1); // substring(1) removes the first character (1)
    }
}

C++:

class Solution {
public:
    string encode(int num) {
        bitset<32> bs(++num); // Creates a 32-bit bitset
        string ans = bs.to_string(); // Converts to string representation
        int i = 0;
        while (ans[i] == '0') { // Finds the first 1
            ++i;
        }
        return ans.substr(i + 1); // Removes the leading zeros and the first '1'
    }
};

Go:

import (
	"strconv"
)
 
func encode(num int) string {
	num++
	s := strconv.FormatInt(int64(num), 2) // convert to binary string
	return s[1:] // slice to remove leading '1'
}

TypeScript:

function encode(num: number): string {
    return (++num).toString(2).substring(1); // toString(2) gives the binary string; substring(1) drops the leading '1'
}

All the code snippets follow the same logic: increment, convert to binary, and remove the leading '1'. Only the binary-conversion and string-manipulation functions differ by language. The C++ version additionally skips leading zeros because std::bitset produces a fixed-width 32-bit string rather than a minimal-length one.
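
As a quick sanity check, the solution can be exercised with a couple of arbitrary values (shown here with the Python snippet above):

sol = Solution()
print(sol.encode(23))   # "1000"   (23 + 1 = 24 = 0b11000, drop the leading '1')
print(sol.encode(107))  # "101100" (107 + 1 = 108 = 0b1101100)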