I have been playing around with the Rhino ETL library and really like the direction it's going. However, the documentation is sparse, and there doesn't seem to be much good information on how to create these pipeline processes, especially using the DSL.
I am simply attempting to create a DSL file that loads a file and exports the data into another file, to see how it all fits together. What I have so far is this:
[DelimitedRecord("\t")]
class User:
    public Name as string
    public Phone as string
    public Email as string

operation import_file:
    file = Rhino.Etl.Core.Files.FluentFile(typeof(User)).From("""E:\Fake\Book1.txt""")
    for row in file:
        yield Row.FromObject(row)

operation export_file:
    file = Rhino.Etl.Core.Files.FluentFile(typeof(User)).To("""E:\Fake\Test.txt""")
    for row in rows:
        record = User(Name: row["Name"])
        file.Write(record)

process ImportFile:
    import_file()
    export_file()
It throws this exception:
File.boo(1,2): BCE0064: Boo.Lang.Compiler.CompilerError: No attribute with the name 'DelimitedRecord' or 'DelimitedRecordAttribute' was found (attribute names are case insensitive). Did you mean 'System.Diagnostics.DelimitedListTraceListener' ?
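From that error it looks like the compiler simply cannot resolve the attribute. As far as I can tell, DelimitedRecord comes from the FileHelpers library that Rhino ETL uses underneath, so my guess (untested) is that the DSL file needs a namespace import at the top, something like:

# guess: bring the FileHelpers namespace into scope for the DSL file
import FileHelpers

or, if the DSL engine doesn't already reference the assembly, Boo's import-from form:

# guess: also reference the FileHelpers assembly explicitly
import FileHelpers from "FileHelpers"

I'm not sure whether the Rhino DSL engine honors imports in a script like this, though.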
If I remove the attribute, I get this error instead:
Exception: Failed to create pipeline ImportFile: The class User must be marked with the [DelimitedRecord] or [FixedLengthRecord] Attribute.
Exception: Failed to execute operation File.import_file: The class User must be marked with the [DelimitedRecord] or [FixedLengthRecord] Attribute.
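That second error suggests the attribute really is the one FluentFile checks for, so the problem seems to be purely attribute resolution inside the Boo DSL. Another thing I considered trying (again, just a guess) is fully qualifying the attribute name so the lookup can't miss:

# guess: fully qualified attribute, assuming the FileHelpers assembly is referenced
[FileHelpers.DelimitedRecord("\t")]
class User:
    public Name as string
    public Phone as string
    public Email as string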
Any ideas here? Or are there any examples of how to use FluentFile within a Rhino ETL DSL?