"read in csv file skip first n lines"
Hi,
when importing CSV files, is there any method to skip lines at the beginning of a file?
Something that behaves like R's "read.table"-option "skip".
Quite often I'd like to read in CSV files that have a structure like Windows' INI files:
[Header]
some garbage here
some non-relevant info here
[Data]
Var1,Var2,Var3,...
0,1,0,...
....
I'd like to start reading the file at the line "Var1,Var2,...".
I just can't find any option to get this done :-(
Best
Josef
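[Editorial aside: outside of RapidMiner, the behavior Josef describes can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the original thread: scan for the "[Data]" marker, then hand everything after it to a CSV parser.]

```python
import csv

# Minimal sketch: skip everything up to and including the "[Data]" marker,
# then parse the remainder as CSV. The file contents below are a stand-in
# for the INI-like structure described in the question.
raw = """[Header]
some garbage here
some non-relevant info here
[Data]
Var1,Var2,Var3
0,1,0
1,0,1
"""

lines = raw.splitlines()
start = lines.index("[Data]") + 1        # first line after the marker
rows = list(csv.reader(lines[start:]))   # csv.reader accepts any iterable of lines
header, data = rows[0], rows[1:]
print(header)   # ['Var1', 'Var2', 'Var3']
```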
Best Answer
Telcontar120 (RapidMiner Certified Analyst, RapidMiner Certified Expert)
@sgenzer So Martin posted a comment in the related product idea that I think showed me the solution. You can use the "Edit Annotations" parameter to manually specify that certain lines are to be treated as comments (even if they don't have the comment character, which is where I was getting hung up before). See the following:
That effectively skips those lines on import, starts reading at the specified line, and interprets the correct number of columns! At that point, the extra work with the macros to rename the attributes isn't needed either, since you can specify the relevant line to be read as names. But I'm sure going to save that process anyway, because it could come in handy with other, more complex data files in the future. Thanks again!
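[Editorial aside: for readers wanting this "mark specific lines as comments" behavior outside RapidMiner, here is a hedged pandas analogy. pandas' skiprows parameter also accepts a list of individual line numbers; the data below is made up for illustration.]

```python
import io
import pandas as pd

# Analogy to "Edit Annotations": list the individual line numbers to skip
# (here lines 0 and 2); line 1 survives and becomes the header.
raw = """garbage line
Var1,Var2,Var3
more garbage
0,1,0
1,0,1
"""

df = pd.read_csv(io.StringIO(raw), skiprows=[0, 2])
print(df.columns.tolist())   # ['Var1', 'Var2', 'Var3']
```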
Answers
I think step 3 of 5 of the import wizard offers this, as you can set lines as name, comment, or unit, but I haven't used it before.
Give that a try and see if it does what you expect.
Best,
JEdward.
Simon, can you pleeeeeeeeeeeease fix this? And in the Excel importer too!
I gave this a quick test with a CSV in the following format:
Line 1: RandomText
Line 2: [Var1], [Var2] (column headings)
Line 3: RandomText
Line 4: RandomText
Line 5: RandomText
Lines 6+: [data1], [data2] (data)
The below process worked fine for me. What have I missed?
I had already tried that before the original post.
This procedure, however, has two drawbacks:
1st: it gets quite unwieldy when 80 or 100 lines have to be annotated manually.
2nd: it just doesn't work as intended. Maybe you'd like to try another file: when annotating the first two lines as comments and the 3rd line as "Name" in the example above, the result set does indeed start with Var1, but the number of fields that are read in is still determined by the first line, so in this case the number of fields is determined by a comment.
Perhaps I'm just using it incorrectly. Could you give me any hint?
Best,
Josef
Point 1: I agree that it's unwieldy. One way around this would be to use Excel and CONCATENATE formulas to build the number of lines you need, then paste the result into the XML of the process.
For example:
   A                  B       C                      D
1  <parameter key="   0       " value="Comment"/>    =CONCATENATE(A1,B1,C1)
2  <parameter key="   =B1+1   " value="Comment"/>    =CONCATENATE(A2,B2,C2)
3  <parameter key="   =B2+1   " value="Comment"/>    =CONCATENATE(A3,B3,C3)
4  <parameter key="   =B3+1   " value="Comment"/>    =CONCATENATE(A4,B4,C4)
Not the most elegant solution; having an option to type a range of rows in RapidMiner would be very nice.
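[Editorial aside: if Excel is not at hand, the same repetitive XML entries can be generated with a short script. This is a sketch of the idea only; the key/value format follows the spreadsheet rows above, and the line count is a made-up example.]

```python
# Generate the repeated <parameter .../> annotation entries for the
# process XML, instead of building them with Excel CONCATENATE formulas.
n_comment_lines = 5   # hypothetical: how many leading lines to mark as comments

params = [f'<parameter key="{i}" value="Comment"/>' for i in range(n_comment_lines)]
print("\n".join(params))
```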
Point 2: Ah! I see the problem; that's a pain.
The below worked for me after I named the required columns in "data_set_meta_data_information". Try this.
This appears to be the only relevant thread from the forum using the obvious search terms.
In the meantime, if anyone else has found a more effective way to do this that doesn't involve going into csv files manually to add comment characters (or simply delete the header lines), I would love to hear about it. @sgenzer, any tricks up your sleeve here?
Lindon Ventures
Data Science Consulting from Certified RapidMiner Experts
Darn you, @Telcontar120! Now that you've thrown down the gauntlet, I am going to tackle this. You know my weakness for bizarre ETL puzzles. Stay tuned.
Scott
how's this? test2.csv is attached
Brilliant, I really appreciate you working on this @sgenzer! Very creative use of macros to do this. This approach is promising, but I think it only works if there are enough delimiters in the first line so it reads the correct number of columns. If you modify the csv so the first line has only text (unfortunately more typical with title rows), then it only reads one column in, and the rest of the process works but only on that first column. And after a bit of testing it seems like it will generally read however many columns as it detects from the first row. So if you can figure out how to tell RapidMiner how many columns to read (not based on the first row of data), then something like this should do the trick.
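[Editorial aside: the column-count problem described here can at least be sketched outside RapidMiner: instead of trusting the first line, scan all rows for the widest one and pad the rest. This is a stdlib-Python illustration with made-up data; the header lookup by the literal name "Var1" is purely hypothetical.]

```python
import csv

# Don't let the first (title) row dictate the column count: take the
# widest row as the true width, then pad shorter rows to match.
raw_lines = [
    "Just a title with no commas",
    "Var1,Var2,Var3",
    "0,1,0",
    "1,0,1",
]

rows = list(csv.reader(raw_lines))
n_cols = max(len(r) for r in rows)                     # 3, not 1
padded = [r + [""] * (n_cols - len(r)) for r in rows]
# Hypothetical header detection: find the row starting with "Var1".
header_idx = next(i for i, r in enumerate(padded) if r[0] == "Var1")
header, data = padded[header_idx], padded[header_idx + 1:]
print(n_cols, header)
```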
Happy Thanksgiving!
Oh darn, of course I work on the next one and then see your note. Leave it to @mschmitz to one-up me. Perfect solution.
Happy Thanksgiving.
Scott
I don't see how the macros can handle a blank row in the CSV file. I'm looking to skip the first 2 lines (the first is a comment, the second is blank) without editing the CSV file. I didn't have luck with the blank row via annotations, but perhaps I overlooked a setting. Any help with skipping blank lines between the comment and the column names?
Hi @matt_yauch,
you can manually set in the operator's parameters to skip the first x lines and just read it in. RM will then automatically detect the attribute names etc.
BR,
Martin
Dortmund, Germany
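[Editorial aside: for anyone landing here from outside RapidMiner, the "skip first x lines" parameter Martin mentions has a direct pandas counterpart. This is an analogy, not the RapidMiner operator, and it handles the comment-plus-blank-line case from the previous post as well.]

```python
import io
import pandas as pd

# skiprows drops the first x physical lines; here x=2 removes the comment
# line and the blank line, so the header is read from "Var1,Var2,Var3".
raw = """# a comment line

Var1,Var2,Var3
0,1,0
1,0,1
"""

df = pd.read_csv(io.StringIO(raw), skiprows=2)
print(df.columns.tolist())   # ['Var1', 'Var2', 'Var3']
print(len(df))               # 2
```

Note that pandas also skips fully blank lines by default (skip_blank_lines=True), so skiprows=1 would work here too.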