Why did the sentiment analysis classify a negative text as positive?
I ran some sentiment analysis, but it gave me a wrong result: a text that was supposed to be negative was classified as positive. What accounts for this misclassification, and what can be done to rectify it? Also, how does the Aylien sentiment analysis decide whether a text is negative, positive, or neutral?
Answers
I don't know what algorithm(s) are behind Aylien. Aylien is a third-party extension; if you want more information about how it works, you will need to contact Aylien directly.
For your problem, have you tried other methods, such as:
1. In RapidMiner:
- Extract Sentiment operator from the Toolbox extension (to install from the Marketplace)
2. Python libraries:
- TextBlob
- NLTK (Natural Language Toolkit)
Hope this helps,
Regards,
Lionel
Lindon Ventures
Data Science Consulting from Certified RapidMiner Experts
How do I perform this? In RapidMiner:
- Extract Sentiment operator from the Toolbox extension (to install from the Marketplace)
Can these two be added in RapidMiner?
- TextBlob
- NLTK (Natural Language ToolKit)
Thank you for your time, and forgive me for the number of questions I have asked.
Here are quick tutorial videos on how to use operators (as part of extensions) and build workflows in RapidMiner.
https://academy.rapidminer.com/learning-paths/get-started-with-rapidminer-and-machine-learning
Also, each operator comes with sample tutorial processes to learn more about how to use that specific operator.
Hope this helps!
Cheers,
Pavithra
When I conduct a sentiment analysis using a phrase that should be estimated as positive, why does RapidMiner instead characterize it as negative?
So it will not recognize irony, sarcasm, tongue-in-cheek remarks, and all the other ways we humans are able to use nice words to make something sound bad, or the other way around.
Apart from that, the training data is very important. Most, if not all, providers of commercial solutions trained their models on public and relatively generic data. So if your content does not match that data well (because, for instance, you work in a rather specific domain), your "obvious giveaways" may not have been in the original data set and are therefore ignored in your texts.
So this leaves you with the option of training your own model, and if you have enough data to train on, you can get pretty good results (but never perfect; blame the flexibility of our language for that...).
I personally like the VADER implementation in NLTK; it gave me good results and was rather easy to implement.