A sentence is a string of single-space separated words where each word consists only of lowercase letters.
A word is uncommon if it appears exactly once in one of the sentences, and does not appear in the other sentence.
Given two sentences s1 and s2, return a list of all the uncommon words. You may return the answer in any order.
Example 1:
Input: s1 = "this apple is sweet", s2 = "this apple is sour"
Output: ["sweet","sour"]
Explanation:
The word "sweet" appears only in s1, while the word "sour" appears only in s2.
Example 2:
Input: s1 = "apple apple", s2 = "banana"
Output: ["banana"]
Constraints:
- 1 <= s1.length, s2.length <= 200
- s1 and s2 consist of lowercase English letters and spaces.
- s1 and s2 do not have leading or trailing spaces.
- All the words in s1 and s2 are separated by a single space.
Approach 01:
C++:

#include <iostream>
#include <sstream>
#include <vector>
#include <unordered_map>
#include <string>
using namespace std;

class Solution {
public:
    vector<string> uncommonFromSentences(string sentence1, string sentence2) {
        unordered_map<string, int> wordCount;
        vector<string> uncommonWords;

        // Process both sentences together
        processSentence(sentence1, wordCount);
        processSentence(sentence2, wordCount);

        // Find uncommon words
        for (const auto& [word, count] : wordCount) {
            if (count == 1) {
                uncommonWords.push_back(word);
            }
        }
        return uncommonWords;
    }

private:
    // Helper function to process a sentence and update word count
    void processSentence(const string& sentence, unordered_map<string, int>& wordCount) {
        stringstream ss(sentence);
        string word;
        while (ss >> word) {
            wordCount[word]++;
        }
    }
};
Python:

class Solution:
    def uncommonFromSentences(self, sentence1: str, sentence2: str) -> list[str]:
        wordCount = {}
        uncommonWords = []

        # Process both sentences together
        self.processSentence(sentence1, wordCount)
        self.processSentence(sentence2, wordCount)

        # Find uncommon words
        for word, count in wordCount.items():
            if count == 1:
                uncommonWords.append(word)

        return uncommonWords

    # Helper function to process a sentence and update word count
    def processSentence(self, sentence: str, wordCount: dict) -> None:
        for word in sentence.split():
            wordCount[word] = wordCount.get(word, 0) + 1
Time Complexity
- Splitting the Sentences:
The split function iterates over each character of the input sentences to extract words. If the lengths of the sentences are \(m\) and \(n\), respectively, this step takes \(O(m + n)\).
- Counting Word Occurrences:
The countWords function iterates over the list of words from each sentence, which takes \(O(w_1)\) and \(O(w_2)\), where \(w_1\) and \(w_2\) are the numbers of words in sentence1 and sentence2, respectively. Since the number of words is bounded by the lengths of the sentences, this can also be considered \(O(m + n)\).
- Finding Uncommon Words:
To find uncommon words, the code iterates over each word in the two word-count maps. This step takes \(O(w_1 + w_2)\), or \(O(m + n)\) in the worst case.
- Overall Time Complexity:
The overall time complexity is \(O(m + n)\), where \(m\) and \(n\) are the lengths of sentence1 and sentence2, respectively.
Space Complexity
- Space for Storing Words:
The split step creates a list of words for each sentence, which requires \(O(w_1 + w_2)\) space for the words from both sentences.
- Space for Word Count Maps:
The code stores word counts in two hash maps, which take \(O(w_1 + w_2)\) space in the worst case.
- Space for the Output:
The result vector, which stores the uncommon words, requires at most \(O(w_1 + w_2)\) additional space in the worst case.
- Overall Space Complexity:
The overall space complexity is \(O(m + n)\), as the space used is proportional to the input sentence lengths.
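The discussion above is written in terms of a split step and a countWords helper that keep one count map per sentence. A minimal Python sketch of that two-map variant, with illustrative class and helper names, could look like this:

class UncommonWordsTwoMaps:
    # Illustrative sketch of the two-map variant the analysis above describes;
    # not the listing shown earlier.
    def uncommonFromSentences(self, sentence1: str, sentence2: str) -> list[str]:
        # Split each sentence into its own word list
        words1 = sentence1.split()
        words2 = sentence2.split()

        # Build one word-count map per sentence
        count1 = self.countWords(words1)
        count2 = self.countWords(words2)

        # A word is uncommon if it appears exactly once in one sentence
        # and not at all in the other
        uncommonWords = [w for w, c in count1.items() if c == 1 and w not in count2]
        uncommonWords += [w for w, c in count2.items() if c == 1 and w not in count1]
        return uncommonWords

    def countWords(self, words: list[str]) -> dict:
        # Count occurrences of each word in the list
        counts = {}
        for word in words:
            counts[word] = counts.get(word, 0) + 1
        return counts

Both variants return the same uncommon words; the two-map form simply makes the "exactly once here, never there" condition from the problem statement explicit.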
Approach 02:
C++:

#include <iostream>
#include <sstream>
#include <vector>
#include <unordered_map>
#include <string>
using namespace std;

class Solution {
public:
    vector<string> uncommonFromSentences(string sentence1, string sentence2) {
        unordered_map<string, int> wordCount;
        vector<string> uncommonWords;

        // Process both sentences together
        processSentence(sentence1, wordCount);
        processSentence(sentence2, wordCount);

        // Find uncommon words
        for (const auto& [word, count] : wordCount) {
            if (count == 1) {
                uncommonWords.push_back(word);
            }
        }
        return uncommonWords;
    }

private:
    // Helper function to process a sentence and update word count
    void processSentence(const string& sentence, unordered_map<string, int>& wordCount) {
        stringstream ss(sentence);
        string word;
        while (ss >> word) {
            wordCount[word]++;
        }
    }
};
Python:

class Solution:
    def uncommonFromSentences(self, sentence1: str, sentence2: str) -> list[str]:
        wordCount = {}
        uncommonWords = []

        # Process both sentences together
        self.processSentence(sentence1, wordCount)
        self.processSentence(sentence2, wordCount)

        # Find uncommon words
        for word, count in wordCount.items():
            if count == 1:
                uncommonWords.append(word)

        return uncommonWords

    # Helper function to process a sentence and update word count
    def processSentence(self, sentence: str, wordCount: dict) -> None:
        for word in sentence.split():
            wordCount[word] = wordCount.get(word, 0) + 1
Time Complexity
- Processing Sentences:
The processSentence function is called twice, once for each sentence. It splits each sentence into words and updates the word count in a hash map. This takes \(O(m)\) for sentence1 and \(O(n)\) for sentence2, where \(m\) and \(n\) are the lengths of the sentences.
- Finding Uncommon Words:
After processing both sentences, the loop iterates through all the words in the wordCount map, which takes \(O(w)\), where \(w\) is the total number of distinct words in both sentences.
- Overall Time Complexity:
The overall time complexity is \(O(m + n)\), since the time is proportional to the combined lengths of the input sentences.
Space Complexity
- Space for Storing Words:
The wordCount map stores the count of each word in both sentences. In the worst case, all words are distinct and the map stores \(O(w)\) entries, where \(w\) is the total number of distinct words.
- Space for the Output:
The uncommonWords vector stores the uncommon words, which could take up to \(O(w)\) space in the worst case (when all words are uncommon).
- Overall Space Complexity:
The overall space complexity is \(O(w)\), where \(w\) is the total number of distinct words in the input sentences.
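As a quick sanity check, the Python Solution class above can be run directly on Example 1. Any ordering of the two uncommon words is accepted, so the result is sorted here only to make the comparison deterministic:

solver = Solution()  # assumes the Python Solution class defined above
result = solver.uncommonFromSentences("this apple is sweet", "this apple is sour")
print(sorted(result))  # ['sour', 'sweet']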
Approach 03:
C++:

#include <vector>
#include <unordered_map>
#include <sstream>
#include <string>
using namespace std;

class Solution {
public:
    vector<string> uncommonFromSentences(string sentence1, string sentence2) {
        vector<string> uncommonWords;
        unordered_map<string, int> wordCount;
        istringstream combinedStream(sentence1 + ' ' + sentence2);
        string word;

        // Count occurrences of each word in the combined sentences
        while (combinedStream >> word) {
            ++wordCount[word];
        }

        // Collect words that appear exactly once
        for (const auto& [currentWord, frequency] : wordCount) {
            if (frequency == 1) {
                uncommonWords.push_back(currentWord);
            }
        }
        return uncommonWords;
    }
};
Python:

import collections

class Solution:
    def uncommonFromSentences(self, sentence1: str, sentence2: str) -> list[str]:
        # Count occurrences of all words in both sentences
        wordCount = collections.Counter((sentence1 + ' ' + sentence2).split())

        # Return the list of words that appear only once
        return [word for word, frequency in wordCount.items() if frequency == 1]
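To see what the combined count looks like before filtering, here is the intermediate Counter for Example 1 (a small illustrative check, separate from the listing above):

import collections

s1, s2 = "this apple is sweet", "this apple is sour"
wordCount = collections.Counter((s1 + ' ' + s2).split())
print(wordCount)
# Counts: this=2, apple=2, is=2, sweet=1, sour=1
# Only 'sweet' and 'sour' have a count of 1, so they are the uncommon words.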
Time Complexity
- Processing the Combined Sentence:
The combined sentence (a concatenation of sentence1 and sentence2 with a space in between) is processed using a single istringstream. Reading and counting words from the combined sentence takes \(O(m + n)\), where \(m\) and \(n\) are the lengths of sentence1 and sentence2, respectively.
- Collecting Uncommon Words:
Iterating through the wordCount map to find words that appear exactly once involves examining each word and its frequency. This takes \(O(w)\), where \(w\) is the number of distinct words in the combined sentences.
- Overall Time Complexity:
The overall time complexity is \(O(m + n + w)\). Since \(w\) is bounded by the total number of words, which is itself \(O(m + n)\), this simplifies to \(O(m + n)\).
Space Complexity
- Space for Word Count:
The wordCount map stores the count of each distinct word from the combined sentences. In the worst case, where every word is unique, the space required is \(O(w)\), where \(w\) is the total number of distinct words.
- Space for the Output:
The uncommonWords vector stores the uncommon words. In the worst case, when all words are uncommon, this vector also takes \(O(w)\) space.
- Overall Space Complexity:
The overall space complexity is \(O(w)\), where \(w\) is the total number of distinct words in the input sentences.