Removing StopWords using Dictionary
Hi
I am using my own dictionary to remove Stopwords. On close analysis, words like "is" are not being removed, although they are in the dictionary. Any clue as to why this is happening?
Thanks,
Hyram
Best Answers
kayman Member Posts: 662 Unicorn

The process flow seems correct at first glance, so just some additional questions:
- How do you do word tokenization? If this is set incorrectly, you might still be taking full sentences as a single token.
- Do you transform to upper or lower case? Since you are looking for 'is', I assume lower case.
- Next you filter by length. As 'is' only contains 2 characters, I assume you keep everything of at least 2 characters. If not, 'is' should already be stripped here, so again this is linked to how you do your word tokenization.
- How is your dictionary constructed? Every stopword on a new line without any spaces? As you are using the NLTK list, it may contain additional characters that RM doesn't handle well.
You can also use the out-of-the-box 'Filter Stopwords (English)' operator; it's very similar to the NLTK list as far as I know.
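The checklist above can be sketched outside RapidMiner to see where the match can fail. This is a minimal, hypothetical mini-version of the chain (tokenize on non-letters, lowercase, keep tokens of at least 2 characters, then filter against a custom dictionary); the `clean` function and sample texts are illustrative, not the actual operators:

```python
import re

# Hypothetical stand-in for the RapidMiner chain described above.
def clean(text, stopwords):
    tokens = re.split(r"[^A-Za-z]+", text)            # Tokenize (non letters)
    tokens = [t.lower() for t in tokens if t]         # Transform Cases (lower)
    tokens = [t for t in tokens if len(t) >= 2]       # Filter Tokens (by length >= 2)
    return [t for t in tokens if t not in stopwords]  # Filter Stopwords (Dictionary)

# A cleanly loaded dictionary removes "is" as expected:
good = {"is", "the", "a"}
print(clean("This is the example", good))  # -> ['this', 'example']

# But if the dictionary file carried invisible characters (e.g. a stray
# carriage return read as part of each word), "is\r" != "is" and the
# filter silently misses it:
bad = {"is\r", "the\r", "a\r"}
print(clean("This is the example", bad))   # -> ['this', 'is', 'the', 'example']
```

This is exactly the kind of mismatch a word-processor file format can introduce without being visible in the document.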
kayman Member Posts: 662 Unicorn

Yeah, it's a bit tricky sometimes. A word such as 'like' can have a big impact on, for instance, sentiment analysis, so I personally wouldn't treat it as a generic stopword.
What I typically do is combine both out of the box stop words and a personal addition.
Anyway, if the word was removed when using the out-of-the-box option but remains with the NLTK doc, there is indeed probably something wrong with the format used and how it's read.

Easiest would be to just save it as a plain and simple .txt file rather than a .docx file; that way you're sure nothing is missed or added.
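A quick sketch of that approach: write the list as plain UTF-8 text, one stopword per line, and strip whitespace when reading it back so stray `\r` or spaces can't break the match. The filename and the short word list are just illustrative:

```python
# A short illustrative subset of stopwords to save.
stopwords = ["i", "me", "my", "is", "are", "was", "the"]

# Plain UTF-8 text, one word per line, explicit Unix line endings.
with open("stopwords.txt", "w", encoding="utf-8", newline="\n") as f:
    f.write("\n".join(stopwords))

# Stripping each line on read guards against invisible characters.
with open("stopwords.txt", encoding="utf-8") as f:
    loaded = {line.strip() for line in f if line.strip()}

print("is" in loaded)  # -> True
```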
Answers
Attached
For the dictionary, I am using NLTK stopwords. I'm not sure if my encoding setting is right?
1. I am using 'non letters' to tokenise my words and it seems to work; no full sentences appear as tokens.
2. Correct, I transform to lower case;
3. Correct - I filter by a minimum length of 2, i.e. any tokens with fewer than 2 characters are out.
4. You have a good point, as I have not checked this. I basically copied and pasted the list into a Word doc.
I initially used 'Filter Stopwords (English)' but it was excluding words like 'like', which I wanted to keep.
Thanks!
Really appreciate your help! Will try what the operator notes suggest, which is in line with what you are saying re the txt format.
Your suggestion re file format worked. Thank you!