Exercise: The Unique Tag Cleaner

Build a tag processing utility that uses sets to remove duplicates and alphabetize a list of blog tags.

Problem statement

On a blog platform, authors frequently enter redundant tags (like “coding”, “c#”, “coding”, “web”). To keep the UI clean and professional, the backend must filter out these duplicates and return an alphabetized list of unique tags before saving them to the database.

Task requirements

  • Create a TagProcessor class within a defined namespace.

  • Implement a CleanTags(List<string> rawTags) method that accepts a list of messy, potentially duplicate tags.

  • The method must remove all duplicate strings.

  • The method must return a new list of these unique tags, sorted alphabetically.

  • Test the implementation by passing a list with duplicates and printing the cleaned, sorted result using a foreach loop.

Constraints

  • Use a HashSet<string> to perform the duplicate filtering efficiently.

  • Initialize the HashSet<string> by passing the raw list directly into its constructor to immediately strip duplicates.

  • Convert the unique set back into a List<string> so it can be sorted.

  • Use the Sort() method on the resulting list before returning it.

Good luck trying the exercise! If you’re unsure how to proceed, check the “Solution” tab above.

Get hints

  • The HashSet<T> constructor can accept an existing collection. Passing the list directly into the constructor via new(rawTags) is the simplest way to strip duplicates without writing a loop.

  • To convert the set back to a list, you can use the List<T> constructor similarly: new(myHashSet).

  • The Sort() method modifies the list in place and does not return a value. Call it on your list on a separate line before returning the list variable.
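The three hints above combine into a short pattern. As an illustration only (this uses a throwaway Demo class and sample tags, not the exercise's TagProcessor), the round trip from list to set and back looks like this:

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        var rawTags = new List<string> { "coding", "c#", "coding", "web" };

        // Passing the list to the HashSet constructor drops duplicates.
        HashSet<string> unique = new(rawTags);

        // Convert the set back into a list so it can be sorted.
        List<string> cleaned = new(unique);
        cleaned.Sort(); // sorts in place; returns void

        foreach (var tag in cleaned)
            Console.WriteLine(tag);
    }
}
```

Note that Sort() with no arguments uses the default string comparison, which is sufficient for this exercise.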


C# 14.0

using System.Collections.Generic;

namespace BlogEngine;

public class TagProcessor
{
    public List<string> CleanTags(List<string> rawTags)
    {
        // 1. Pass the rawTags into a new HashSet to remove duplicates
        // 2. Convert the HashSet back into a List
        // 3. Sort the new list alphabetically
        // 4. Return the sorted list
        return [];
    }
}