1. Use an integer Mark[i] to represent the character set of word[i] in the
dict: in the lower 26 bits, each bit records whether the corresponding
character appears in the word.
(Mark[i] & Mark[j]) == 0 means word[i] and
word[j] are disjoint (they share no character).
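A minimal sketch of this step (Python here, since the post names no language; `char_mask` and `disjoint` are my own names):

```python
def char_mask(word):
    """Set bit (c - 'a') for each character appearing in word."""
    m = 0
    for c in word:
        m |= 1 << (ord(c) - ord('a'))
    return m

def disjoint(mask_i, mask_j):
    """The test from the text: no shared bits means no shared characters."""
    return (mask_i & mask_j) == 0
```

For example, `disjoint(char_mask("abc"), char_mask("xyz"))` is True, while `disjoint(char_mask("abc"), char_mask("cde"))` is False because both words contain 'c'.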
2. While calculating each Mark[i], we also record the index of the
maximum-length word in
Longest[(~Mark[i]) & 0x03ffffff].
For example,
if word[i] = "abcdefg........uvwabcdefg.....uvw", then Longest[(~
Mark[i]) & 0x03ffffff] = i :
the longest word that contains exactly the characters {a,b,c,...,u,v,w} is
word[i], with length 46.
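Step 2 might look like the following sketch, with a dict standing in for the 2^26 array so the example stays small (`build_longest` is a made-up name):

```python
FULL = 0x03ffffff  # lower 26 bits

def build_longest(words):
    """Longest[(~Mark[i]) & FULL] = index of the longest word whose
    character set is exactly the complement of that key."""
    longest = {}
    for i, w in enumerate(words):
        mark = 0
        for c in w:
            mark |= 1 << (ord(c) - ord('a'))
        key = (~mark) & FULL
        # Keep only the longest word per character set.
        if key not in longest or len(w) > len(words[longest[key]]):
            longest[key] = i
    return longest
```

Here "ab" and "baba" share the same character set {a,b}, so they land in the same slot and the longer one wins.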
3. We now relax the definition of Longest[mark] from
"the longest word that contains exactly all unmarked
characters" to
"the longest word that contains at most the unmarked characters".
To fill in the values, we push each Longest[mark] into Longest[mark'],
where:
mark' = turn one 1 in mark into 0;
a word that avoids every character marked in mark certainly avoids the
smaller set marked in mark'. If we process the marks in decreasing order of
"number of 1s in mark", every mark is handled exactly once and the whole
table is filled in a single pass. Afterwards, Longest[Mark[i]] is the
longest word disjoint from word[i].
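The table-filling steps above can be sketched end to end on a reduced alphabet of k letters (the post uses k = 26; all helper names are mine), together with the disjoint-word query it enables:

```python
def relax_longest(words, k=5):
    """After the sweep, longest[mask] is the index of the longest word
    whose characters all lie outside mask ("at most the unmarked
    characters"), or -1 if there is none."""
    full = (1 << k) - 1
    longest = [-1] * (1 << k)

    def better(i, j):
        # Prefer the longer word; -1 means "no word yet".
        if i == -1:
            return j
        if j == -1:
            return i
        return i if len(words[i]) >= len(words[j]) else j

    # Step 2: exact entries, indexed by the complement of the word's set.
    marks = []
    for i, w in enumerate(words):
        m = 0
        for c in w:
            m |= 1 << (ord(c) - ord('a'))
        marks.append(m)
        key = (~m) & full
        longest[key] = better(longest[key], i)

    # Step 3: process masks with more 1s first, pushing each value down
    # to the masks obtained by turning one 1 into 0.
    for mask in sorted(range(1 << k), key=lambda m: -bin(m).count("1")):
        if longest[mask] == -1:
            continue
        for b in range(k):
            if (mask >> b) & 1:
                sub = mask ^ (1 << b)
                longest[sub] = better(longest[sub], longest[mask])
    return marks, longest
```

With `words = ["ab", "cde", "abc", "e"]`, `longest[marks[0]]` is 1: the longest word sharing no character with "ab" is "cde".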
4. Using a queue to re-propagate only the Longest[mark] entries that were
actually updated, instead of recalculating every Longest[mark], can
sometimes dramatically reduce the work in practice. The worst-case
complexity stays the same.
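The queue variant might be sketched as a worklist that only revisits entries that actually improved (again on a reduced alphabet; `init` is the exact table from step 2, and all names are mine):

```python
from collections import deque

def relax_with_queue(init, words, k):
    """init maps mask -> word index (exact entries from step 2).
    Produces the same table as the full sweep, but only ever
    propagates from slots whose value changed."""
    longest = dict(init)
    queue = deque(init)  # masks whose value may still propagate
    while queue:
        mask = queue.popleft()
        i = longest[mask]
        for b in range(k):
            if (mask >> b) & 1:
                sub = mask ^ (1 << b)
                j = longest.get(sub, -1)
                if j == -1 or len(words[i]) > len(words[j]):
                    longest[sub] = i   # improved: keep propagating
                    queue.append(sub)
    return longest
```

A slot is re-enqueued only when it strictly improves, so the loop terminates; on sparse dictionaries most of the 2^k masks are never touched.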
5. For complexity: 2^26 * 26 + O(n*m), where n is the number of words and m
is the average word length. This is not a proper big-O expression, but it
is more informative.
6. I wrote this just to practice explaining something in English. It still
reads awkwardly to me.