Delete Duplicate Folders In System - Leetcode Solution

Difficulty: Hard

Topics: string, hash-table, array

Problem Statement:

Given a list of folder/file paths in a file system, delete all the duplicate folders/files from it and return the remaining paths.

Example:

Input: paths = ["/a","/a/b/c","/a/b/d","/b","/b/c","/b/d"]
Output: ["/a","/b","/a/b/c","/a/b/d","/b/c","/b/d"]

Solution:

We can approach this problem using a hashmap that tracks each path and its number of occurrences. We traverse the given list of paths, normalize each path by splitting it into its individual levels (directories) and rejoining the non-empty components, and increment that path's count in the hashmap as we iterate through the list.
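As a quick illustration of the splitting step: `str.split` on a path with a leading slash yields an empty first component, which the counting code must skip (nothing beyond the Python standard library is assumed here):

```python
# splitting an absolute path yields a leading empty string
parts = "/a/b/c".split("/")
print(parts)  # ['', 'a', 'b', 'c']

# rebuilding the path from the non-empty components normalizes it,
# so "/a/b/" and "/a/b" map to the same key
normalized = "/" + "/".join(d for d in parts if d)
print(normalized)  # /a/b/c
```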

The normalized paths are stored as keys in the hashmap with their occurrence counts as the values. Whenever a path's count is greater than one, every occurrence after the first is a duplicate, and we drop it from the list of paths.

We will then return the updated list of paths after deleting all the duplicate folders/files.
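The counting idea above can be sketched as follows; the helper names `normalize` and `count_paths` are our own, not part of the original solution:

```python
def normalize(path):
    # rebuild the path from its non-empty levels so "/a/" and "/a" map to the same key
    return "/" + "/".join(d for d in path.split("/") if d)

def count_paths(paths):
    # hashmap from normalized path -> number of occurrences
    counts = {}
    for p in paths:
        key = normalize(p)
        counts[key] = counts.get(key, 0) + 1
    return counts

print(count_paths(["/a", "/a/", "/b"]))  # {'/a': 2, '/b': 1}
```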

Code:

  1. Initialize a dictionary to store the count of occurrences of each normalized path.

    directory = {}

  2. Traverse the list of paths, normalize each one into its full directory path, and count how many times it occurs.

    for path in paths:
        current_dir = ""
        for d in path.split("/"):
            if d == "":            # skip the empty piece before the leading "/"
                continue
            current_dir += "/" + d
        directory[current_dir] = directory.get(current_dir, 0) + 1

  3. Build the result, keeping only the first occurrence of any path whose count shows it is duplicated.

    result = []
    seen = set()
    for path in paths:
        current_dir = ""
        for d in path.split("/"):
            if d == "":
                continue
            current_dir += "/" + d
        # skip this path if it is duplicated and we have already kept a copy
        if directory[current_dir] > 1 and current_dir in seen:
            continue
        seen.add(current_dir)
        result.append(path)
    return result
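Putting the three steps together, here is a self-contained sketch (the function name `delete_duplicates` is our choice, not from the problem statement):

```python
def delete_duplicates(paths):
    # Steps 1-2: count occurrences of each normalized path
    directory = {}
    normalized = []
    for path in paths:
        current_dir = ""
        for d in path.split("/"):
            if d == "":            # skip the empty piece before the leading "/"
                continue
            current_dir += "/" + d
        normalized.append(current_dir)
        directory[current_dir] = directory.get(current_dir, 0) + 1

    # Step 3: keep only the first occurrence of each duplicated path
    result = []
    seen = set()
    for path, key in zip(paths, normalized):
        if directory[key] > 1 and key in seen:
            continue
        seen.add(key)
        result.append(path)
    return result

print(delete_duplicates(["/a", "/a/b/c", "/a/b/c", "/b"]))
# ['/a', '/a/b/c', '/b']
```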

Complexity analysis:

Time Complexity: We traverse the paths twice and split each path into its components, so the algorithm runs in O(N · L) time, where N is the number of paths in the input list and L is the maximum path length.

Space Complexity: The dictionary stores one entry per distinct normalized path, so the space complexity is O(N · L) in the worst case, where N is the number of paths and L is the maximum path length.

Conclusion:

We can solve the given problem by using a hashmap to count path occurrences and removing the duplicates from the original list of paths. For more details, refer to the code and the complexity analysis above.
