Interpreting LogLikelihood For LDA Topic Modeling
Hi RM Community,
Based on the attached picture, how should I interpret the log-likelihood values changing with the number of topics? Is higher better or lower better? Does it need to be squared to be positive?
Thanks!
Best Answer
Hi,
it's the negative LLH. The lower the better.
BR,
Martin
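For anyone who wants to reproduce this kind of LLH-vs-topic-count curve outside of RapidMiner, here is a minimal sketch using scikit-learn's LatentDirichletAllocation (not the RapidMiner operator; the toy corpus and topic counts are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny made-up corpus, repeated so the model has something to fit.
docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "the stock market fell today",
    "investors sold shares on the market",
    "my dog chased the cat",
    "share prices rose after the report",
] * 20

X = CountVectorizer(stop_words="english").fit_transform(docs)

# Sweep the number of topics and record the approximate log-likelihood.
for k in (2, 3, 5):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    # score() returns an approximate log-likelihood: higher means a better
    # fit. A tool that reports the *negative* LLH flips the sign, so there
    # lower is better, as noted above.
    print(f"{k} topics: approx. LLH = {lda.score(X):,.1f}")
```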
Thanks for the prompt reply! So in this case, is -230000 better than -240000, or vice versa?
Thanks so much Martin!
By the way, @svtorykh,
one of the next updates will have more performance measures for LDA. I just need to find time to implement them. LLH by itself is always tricky, because it naturally falls as you add more topics.
BR,
Martin
That would be very nice to have! Please keep us posted, Martin!
Hello. I want to find the optimal number k for k-means using the LDA log-likelihood value.
For me, using alpha and beta as heuristics, the top 5 gives the highest value. Now, how do I choose k optimally? Does anyone know how to help? Thanks a lot. I searched a lot, but I did not find anything. :(
Hey @jozeftomas_2020,
I am fairly confused. k-means and LDA are quite different models. Why and how do you want to mix them?
~Martin
In articles I have seen LDA used to find the optimal k, but I do not know how.
And how can I tell which LDA gives a better result? Should alpha and beta be adjusted to be low or high to get a better result?
I'm so sorry.
Thanks a lot!
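For context on what alpha and beta control, here is an illustrative sketch using scikit-learn's LDA (not the RapidMiner operator), where they correspond to the doc_topic_prior and topic_word_prior parameters; the values below are arbitrary:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny made-up corpus purely for illustration.
docs = ["the cat sat on the mat", "stocks fell on the market"] * 30
X = CountVectorizer().fit_transform(docs)

# alpha (doc_topic_prior): smaller values push each document toward
# fewer topics. beta (topic_word_prior): smaller values push each
# topic toward fewer characteristic words. Both values are arbitrary.
lda = LatentDirichletAllocation(
    n_components=2,
    doc_topic_prior=0.1,    # "alpha"
    topic_word_prior=0.01,  # "beta"
    random_state=0,
).fit(X)

# Compare this approximate LLH across alpha/beta settings.
print(lda.score(X))
```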
@svtorykh,
I've added Perplexity as the default performance measure for LDA. Perplexity is defined as
exp(-LLH / #tokens)
and should be minimized. That's roughly what you see in common blog posts on LDA.
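To make the definition concrete, a few lines of Python (the LLH and token count below are invented numbers, not taken from this thread):

```python
import math

llh = -230000.0    # total log-likelihood of the scored corpus (made up)
n_tokens = 50000   # total number of tokens in that corpus (made up)

# Perplexity as defined above: exp(-LLH / #tokens). Lower is better;
# a perfect model that assigned probability 1 to every token would
# score exp(0) = 1.
perplexity = math.exp(-llh / n_tokens)
print(f"perplexity = {perplexity:.1f}")  # exp(4.6) ~ 99.5
```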
It's not yet on the marketplace. Let's see when we have enough features to publish.
Cheers,
Martin
Thanks much!
Always happy to help! Would it be possible for you to present your use case at RM Wisdom in October?
May I ask how you generated the evaluation plot? Is there a specific operator for that, or did you plot it outside of RapidMiner?
Thanks!
/Aya
Yes, this works well. Thanks a lot!
/Aya