Article categorization from web
Hi!
My name is Andrea. I'm trying to get all the articles on the "repubblica.it" home page, and then I have to categorize them.
For the first part (accessing the articles) I thought it would be useful to extract the content of the <p> tags of the page (www.repubblica.it).
So I chose the operators Crawl Web and Enrich Data by Webservice (to access the relevant content via XPath). I set up the Enrich operator with an XPath query (attribute name = Article, query expression = //h:p), but as output I receive a file with the entire page (not the portion I need), as if the XPath query had no effect. Did I choose the wrong operators, or is something else wrong?
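A likely culprit is the XPath namespace: a query like //h:p only matches if the prefix h is actually bound to the XHTML namespace. The same behaviour can be reproduced outside RapidMiner with a minimal Python sketch (standard library only; the document here is a made-up stand-in for the real page):

```python
import xml.etree.ElementTree as ET

# A toy XHTML document: every element lives in the XHTML namespace.
xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body><p>First paragraph</p><p>Second paragraph</p></body>
</html>"""

root = ET.fromstring(xhtml)

# Without a namespace mapping, .//p silently matches nothing,
# because the elements' real tag is {http://www.w3.org/1999/xhtml}p:
assert root.findall(".//p") == []

# With the prefix "h" bound to the XHTML namespace, the query works:
ns = {"h": "http://www.w3.org/1999/xhtml"}
paragraphs = [p.text for p in root.findall(".//h:p", ns)]
print(paragraphs)
```

If the prefix is not declared (or the page is not valid XHTML in the first place), the query matches nothing and some tools fall back to returning the whole input, which would explain the symptom described above.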
Can someone help me, please?
If possible, I'd like to post the XML code of my process here: may I?
Thanks,
Andrea
Answers
I don't know what is wrong with your query, but I have experimented a little with the two extensions "Text Processing" and "Web Mining". I fetch the front page of your site, extract all the links to the news articles, retrieve those pages, and extract their content. This isn't a perfect solution, but it's something to build upon.
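The link-extraction step of that pipeline can be sketched in plain Python (outside RapidMiner), using the standard library's HTMLParser and a hypothetical filter for repubblica.it URLs:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets of <a> tags that look like article links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Hypothetical filter: keep only links into the news site.
            if href.startswith("http://www.repubblica.it/"):
                self.links.append(href)

# A made-up fragment standing in for the real front page:
front_page = """
<a href="http://www.repubblica.it/politica/article1.html">Politics</a>
<a href="http://ads.example.com/banner">Ad</a>
<a href="http://www.repubblica.it/sport/article2.html">Sport</a>
"""

parser = LinkExtractor()
parser.feed(front_page)
print(parser.links)
```

Fetching each collected link and extracting its text would then correspond to the "get these pages and extract the content" steps described above.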
Have fun with this
Marcin
Thanks! ;D
But my case differs from Neil's: his data source is a database, while I need to get data from a file.
This is the problem: I have a file containing a text, and I want to classify it into a category (such as sport, politics, economy...).
Here's what I have produced so far. I'm sure I have problems with the labels, as you can see from the error I get; but I don't think that's my only problem ;D In fact, I don't know how to set up the training set.
Please, can someone help me?
The labels are indeed missing in your case. If you use the "Process Documents from Files" operator, the class name of each directory you import files from is used as the label for those files. This makes sense if you put the documents from different categories into different directories and add all of them to the text directories parameter of the operator. But you are importing from only one directory, so there is only one category. What should the classifier learn in this case? If there are no categories to distinguish, classification doesn't make much sense.
You have to assign the categories as label/class to each document in your training set; otherwise there is nothing to learn. Using different directories for the classes is certainly the easiest way to achieve this.
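The directory-per-class labeling scheme can be sketched in plain Python, as a hypothetical stand-in for what "Process Documents from Files" does with subdirectory names (this is an illustration, not RapidMiner's actual code):

```python
import os
import tempfile

def load_labeled_documents(root_dir):
    """Each subdirectory name of root_dir becomes the class label
    of the text files it contains, mimicking the operator's behaviour."""
    examples = []
    for label in sorted(os.listdir(root_dir)):
        class_dir = os.path.join(root_dir, label)
        if not os.path.isdir(class_dir):
            continue
        for name in sorted(os.listdir(class_dir)):
            with open(os.path.join(class_dir, name), encoding="utf-8") as f:
                examples.append((f.read(), label))
    return examples

# Demo with a throwaway directory tree: one folder per category.
with tempfile.TemporaryDirectory() as root:
    for label, text in [("sport", "match report"),
                        ("politics", "election news")]:
        os.makedirs(os.path.join(root, label))
        with open(os.path.join(root, label, "doc.txt"), "w",
                  encoding="utf-8") as f:
            f.write(text)
    data = load_labeled_documents(root)

print(data)
```

With only one directory, every example would carry the same label, which is exactly why the classifier has nothing to learn in that setup.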
Regards
Matthias
But where do I put the file with the text to categorize?
As you suggested, I divided the training texts into folders. Now I want to use the trained system to categorize a new text, but I don't know where to put it.
Thanks in advance!
You can either import the files in the same way you used for training (if you have some pre-categorized test data), or import unknown documents from only one directory, using any label you want, with "Process Documents from Files". You have to apply the same preprocessing steps to the data as you did before training the model (the existing attributes have to be the same). Then you should be able to use "Apply Model" and get predictions.
If the necessary steps aren't clear to you, consider posting your process XML to show your current progress.
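Why the attributes have to match can be illustrated with a tiny bag-of-words sketch in Python. This is an analogy for reusing the training word list at apply time, not RapidMiner's actual implementation: the vocabulary is fixed during training and reused unchanged when vectorizing unseen documents.

```python
from collections import Counter

def build_vocabulary(train_texts):
    """Collect the attribute set (word list) from the training corpus."""
    return sorted({w for t in train_texts for w in t.lower().split()})

def vectorize(text, vocab):
    """Turn a text into counts over the FIXED training vocabulary.
    Words unseen during training are simply dropped, so the new
    example has exactly the same attributes as the training data."""
    counts = Counter(text.lower().split())
    return [counts.get(w, 0) for w in vocab]

train = ["goal match stadium", "election vote parliament"]
vocab = build_vocabulary(train)

# An unknown document: known words are counted, new words ("referee")
# are ignored rather than creating a new attribute.
vector = vectorize("goal goal referee", vocab)
print(vocab)
print(vector)
```

If the unknown document were vectorized with its own, different vocabulary, the resulting attributes would not line up with the model's, and "Apply Model" could not work.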
Regards
Matthias
Thanks!
You have to add something like the operator chain I added to the bottom of your process. I hope this gives you some hints on how to classify new texts. Regards
Matthias
I'm extracting all the article/link pairs from "www.repubblica.it" (thanks to Parostatek).
Now what I need is to save each article in a file named like this: "value_of_the_attribute_URL.txt" (the file should contain the value of the "content" attribute).
I'd also like to specify the output folder where the process creates the files.
Here's what I've done so far: as you can see, I'm currently creating ONE file called Articles.txt in which I save all the articles. I need to fix this, as explained above.
Thanks a lot for your help!
Inside "Loop Examples" you could do something like the solution described here: http://rapid-i.com/rapidforum/index.php/topic,4055.0.html
You can extract the URL you want to use as the file name in a similar way using macros. But you have to make sure that the URL string contains no characters that are invalid in file names.
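One possible sanitizing scheme, sketched in Python (the exact replacement rules here are a hypothetical choice, not something RapidMiner prescribes): drop the scheme, then map every character that is not alphanumeric, a dot, or a dash to an underscore.

```python
import os
import re
import tempfile

def url_to_filename(url):
    """Turn a URL into a legal file name (one possible scheme)."""
    name = re.sub(r"^https?://", "", url)          # drop the scheme
    name = re.sub(r"[^A-Za-z0-9.-]", "_", name)    # neutralize /, ?, :, ...
    return name + ".txt"

def save_article(folder, url, content):
    """Write one article's content to a file named after its URL."""
    path = os.path.join(folder, url_to_filename(url))
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return path

fname = url_to_filename("http://www.repubblica.it/sport/article2.html")
print(fname)

# Quick demo in a throwaway output folder:
with tempfile.TemporaryDirectory() as out:
    saved = save_article(out, "http://www.repubblica.it/esteri/news.html",
                         "article body here")
    assert os.path.basename(saved) == "www.repubblica.it_esteri_news.html.txt"
```

In the RapidMiner process, the sanitized value would play the role of the macro used as the file name inside "Loop Examples".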
I think you could simplify the chain "Data to Documents" - "Loop Collection" - "Documents to Data" by using a single "Process Documents from Data" instead. This won't affect functionality, but it requires two fewer operators.
Regards
Matthias