Data Ingestion into Hadoop (ETL) and data processing using Pig, Hive and Spark
$250-750 USD
Paid on delivery
Data has to be ingested into the Hadoop environment using an ETL tool (Informatica, Attunity).
Data in HDFS has to be processed using Pig, Hive and Spark.
Project might be a variation on the above.
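As a rough illustration of the requested pipeline, here is a minimal PySpark sketch of the processing stage. It assumes the ETL tool (e.g. Informatica or Attunity) has already landed data in a Hive table; the table name, column names, and output path are hypothetical placeholders, not part of the project specification.

```python
# Minimal sketch: read ingested data from a Hive table, apply a
# transformation, and write the result back to HDFS.
# All names ("raw_events", the columns, the output path) are
# hypothetical placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("etl-processing-sketch")
         .enableHiveSupport()   # read tables registered in the Hive metastore
         .getOrCreate())

# Data previously ingested into Hive by the ETL tool
events = spark.table("raw_events")

# Example transformation: daily event counts per source system
daily = (events
         .groupBy("source_system",
                  F.to_date("event_ts").alias("event_date"))
         .agg(F.count("*").alias("event_count")))

# Write the processed result back to HDFS as partitioned Parquet
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/processed/daily_event_counts"))
```

The same aggregation could equally be expressed in HiveQL or Pig Latin; Spark is shown here only because the listing names it alongside the other two.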
Project ID: #11674429
About the project
17 freelancers are bidding on average $505 for this job
I have 9 years of experience in IT, including 2 years in Hadoop technologies: Hive, Pig, Sqoop, Spark, MapReduce. I have worked on Hadoop projects and POCs with different source systems such as mainframes and SQL Server.
More than 15 years of engineering experience using Java for object architecture, design and programming (J2SE/J2EE/Struts/Spring), designing highly available, scalable, multi-tenant web and XML/SOAP/REST endpoints, ser…
I have 4 years of experience in Hadoop, including end-to-end implementation of 4 Hadoop projects.
Hello, could you explain your project in a bit more detail, so that I can tell whether I'm able to help? For example, what kind of data do you have in your ETL? Waiting for your reply. Thanks, Nicolas
Hi, we have 11 years of experience in big data, Hadoop, Spark and more, with sound experience in data processing.
Hi, I have 3+ years of experience in DWH, Python and Informatica. Let me know the details of the project.
I'm a big data Hadoop/Spark developer and certified instructor with 3 years of experience in total, and I have done many training projects similar to yours, so I'm confident I can easily do the following assig…
Hi, I have the required experience and I can help you. Please contact me for more details. Regards, Mohammad Alaa
I have very good experience in data warehousing projects. I am sure you will be satisfied with my work. Also, I am comfortable with the Hadoop ecosystem.
I can do the processing part: pulling the data into HDFS and processing it using Hive/Spark or any Hadoop technology of your choice. Let me know how I can help you.
I have just done a similar project, in which I took data from Hive tables into Spark (as Spark Datasets), applied transformations to that data, and stored it back into HDFS; the data was partitioned in Hive o…