[SOLVED] Visualizing text categorisation model
Hi all,
I have a little question. For text classification I have tried different modeling techniques (Naïve Bayes, libSVM and K-NN). The performance is not really great, but I expect that is due to the quality of the data and the overlap between the different categories (which is probably also the reason why a decision tree is not working).
However, to report on this I would like to visualize which words/elements have a major influence on the model's decision to assign a text to a certain category. Maybe I am explaining this terribly (that might be the reason why I haven't been able to find anything on this topic yet). But my question in layman's terms is: how can I see which words "trigger" a certain category?
Thank you very much for your help!
Answers
For k-NN it is quite hard to interpret the model. For Naive Bayes you can connect its model output port to the process output and investigate the model. The Linear (!!!) SVM delivers a well-interpretable weight vector, which you can inspect either by looking at the model or by connecting the weights output to the process output.
Best regards,
Marius
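For anyone who wants to try the same idea outside RapidMiner, here is a minimal sketch in Python with scikit-learn (an assumption, not the process from this thread, and with made-up toy documents): it trains a linear SVM on a small corpus and prints the highest-weighted words for each class, i.e. the words that "trigger" a category.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
import numpy as np

# Toy example data; replace with your own documents and labels.
docs = [
    "invoice payment due amount",
    "meeting schedule agenda room",
    "payment overdue invoice reminder",
    "agenda minutes meeting notes",
]
labels = ["finance", "meetings", "finance", "meetings"]

# Turn the texts into a TF-IDF term matrix.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Fit a linear SVM; its coefficients are per-word weights.
model = LinearSVC()
model.fit(X, labels)

feature_names = np.array(vectorizer.get_feature_names_out())

# For a binary problem LinearSVC stores one weight vector: positive weights
# pull towards model.classes_[1], negative weights towards model.classes_[0].
# (With more than two classes, coef_ has one row of weights per class.)
weights = model.coef_[0]
order = np.argsort(weights)
print(f"Words pulling towards '{model.classes_[0]}':", feature_names[order[:5]])
print(f"Words pulling towards '{model.classes_[1]}':", feature_names[order[-5:][::-1]])

The per-class weight lists produced this way can then be plotted (e.g. as bar charts of the top words per category) to make the report-ready visualization asked about above.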