
JAPANESE Tokenizing

turutosiya Member Posts: 2 Contributor I
edited June 2019 in Help
Hi.

I'm a newbie at RapidMiner.

I'm trying to mine some web pages with "GetPage", "Extract Content" and "Process Documents".
It seems to work well for ENGLISH pages, but for JAPANESE pages the tokenizer doesn't work well.

Is Japanese tokenizing not supported?

Answers

  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    Not really, and as I'm not an expert on Japanese, I don't have a clue how we should do this. They don't have whitespace, do they?
    How do you determine where a word ends?

    Greetings,
      Sebastian
  • el_chief Member Posts: 63 Contributor II
    You will probably want to tokenize using the regular expression mode, with a regular expression that matches every character. This should tokenize the document on each character, which I believe is what you want for Japanese and Chinese.

    You should also try the Text Processing > Transformation > Generate n-Grams (Characters) operator.
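    The idea behind per-character tokenizing plus character n-grams can be sketched in plain Python (the actual operators are configured inside RapidMiner, so this is only an illustration; the sample sentence is my own):

    ```python
    import re

    # Sample Japanese sentence ("I like ramen") - my own example.
    text = "私はラーメンが好きです"

    # The regex "." matches any single character, so tokenizing in
    # regular expression mode with it yields one token per character.
    tokens = re.findall(r".", text)

    # Character bigrams - the same idea as Generate n-Grams (Characters).
    bigrams = [text[i:i + 2] for i in range(len(text) - 1)]

    print(tokens)
    print(bigrams)
    ```

    Character bigrams are a common workaround for scripts without word delimiters: even without real word boundaries, overlapping pairs of characters capture enough context for many text-mining tasks.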
  • karlrb Member Posts: 4 Contributor I
    If I can be of any help, I would be happy to look into any specific questions on this subject.  My wife is Japanese and I'm in the process of learning Japanese - amazingly complex.

    Karl Bergerson
    Seattle WA USA
    karl.bergerson@gmail.com
  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi Karl,
    You are very welcome to, if you can come up with a good algorithm for Japanese tokenization!

    With kind regards,
      Sebastian
  • turutosiya Member Posts: 2 Contributor I
    Hi All.

    It's been a really long time since I started this project. At last, I have time to try.

    I'm looking for a document describing the API spec for the Tokenizer.
    Does anyone know of one?

    I'm trying to implement a JapaneseTokenizer that works with a morphological analysis engine, such as ChaSen / MeCab.
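    Until a morphological engine like MeCab or ChaSen is wired in, a crude stopgap is to segment at script boundaries (kanji / hiragana / katakana / Latin). A minimal sketch, assuming Python; this only groups runs of the same script, while a real morphological analyzer finds true word boundaries:

    ```python
    import re

    # Crude fallback segmenter: each run of characters from the same
    # script becomes one "token". Not morphological analysis - just a
    # rough approximation until MeCab / ChaSen is available.
    PATTERN = re.compile(
        r"[\u4e00-\u9fff]+"    # runs of kanji
        r"|[\u3040-\u309f]+"   # runs of hiragana
        r"|[\u30a0-\u30ff]+"   # runs of katakana (incl. long-vowel mark)
        r"|[A-Za-z0-9]+"       # runs of Latin letters / digits
    )

    def rough_tokenize(text):
        return PATTERN.findall(text)

    print(rough_tokenize("私はRapidMinerでテキストマイニング"))
    ```

    The approach works because Japanese words often (though far from always) change script at their boundaries, e.g. a kanji stem followed by hiragana particles, so it gives usable tokens for quick experiments.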