Senior Data Engineer




Rodrigo is a data professional with hands-on experience designing and implementing data architectures & ML pipelines in Scala and Python. His academic background gives him strong skills in mathematics, predictive modeling, and financial domain knowledge. He has a keen interest in functional programming, artificial intelligence, and distributed systems.

He currently works at PayClip as a Senior Data Engineer and teaches machine learning at the Western Institute of Technology and Higher Education.


  • AI & ML
  • Blockchain
  • Fintech & Financial Engineering
  • Functional Programming
  • Distributed Systems


  • Executive Program on Business Management, 2019

    Tecnológico de Monterrey

  • BSc in Financial Engineering, 2018

    Instituto Tecnológico y de Estudios Superiores de Occidente



Senior Data Engineer


Sep 2019 – Present Guadalajara, Mexico
  • Build and maintain data processing pipelines with Scala, Python, Airflow, and Apache Spark.
  • Integrate external data sources and automate business processes.
  • Contribute to R&D of new financial products.

Senior Data Engineer


Nov 2018 – Sep 2019 Guadalajara, Mexico
  • Created the data processing pipeline from scratch using Scala & Apache Spark.
  • Created a search interface based on Django and Elasticsearch.
  • Performed business data modeling and data-quality assessments.

Big Data Consultant

Intersys Consulting

Mar 2018 – Oct 2018 Guadalajara, Mexico

Proactive member of the data-engineering team and Scala developer, focused on functional programming, big data, and ML.

  • Contributed to the internal definition of the big-data career path.
  • Taught Scala and big-data tools (e.g., Spark) to internal consultants.
  • Built POCs using a big-data tech stack (Akka Streams, Kafka, Cassandra, Spark).

Jr. Data Scientist


Feb 2017 – Oct 2018 Guadalajara, Mexico

Crabi's pricing model proposal is known as UBI (usage-based insurance): the insurance premium is calculated from each user's behavioral data (generated by telematics devices, a mobile app, social networks, etc.). The model starts with generalized linear models and scales up to machine-learning algorithms. My main responsibilities included:

  • Contributed to the development of a risk model based on behavioral data.
  • Built a real-time data processing platform using Spark, Kafka, and Cassandra.
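To illustrate the UBI idea described above, here is a minimal sketch of a log-link GLM scoring function that maps behavioral features to a premium. All names, coefficients, and the base rate are invented for illustration; this is not Crabi's actual model.

```python
import math

# Assumed base premium (hypothetical value for this sketch).
BASE_PREMIUM = 500.0

# Invented coefficients for invented behavioral features:
# positive values increase the premium, negative values would decrease it.
COEFFICIENTS = {
    "hard_brakes_per_100km": 0.08,
    "night_driving_ratio": 0.35,
    "avg_speeding_ratio": 0.50,
}

def premium(features: dict) -> float:
    """Log-link GLM scoring: premium = base * exp(sum(beta_i * x_i)).

    Missing features default to 0.0, i.e. they do not affect the score.
    """
    linear = sum(COEFFICIENTS[k] * features.get(k, 0.0) for k in COEFFICIENTS)
    return BASE_PREMIUM * math.exp(linear)

# Two hypothetical drivers: the riskier behavior yields a higher premium.
safe_driver = {"hard_brakes_per_100km": 0.5,
               "night_driving_ratio": 0.1,
               "avg_speeding_ratio": 0.02}
risky_driver = {"hard_brakes_per_100km": 4.0,
                "night_driving_ratio": 0.6,
                "avg_speeding_ratio": 0.3}

print(round(premium(safe_driver), 2))
print(round(premium(risky_driver), 2))
```

The exponential link keeps premiums positive and makes each coefficient act as a multiplicative risk factor, which is why log-link GLMs are a common starting point before moving to more flexible ML models.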

Intern Data Scientist


Nov 2016 – Feb 2017 Guadalajara, Mexico

I began my internship at Crabi while it was an early-stage startup in its R&D phase. My main responsibilities were:

  • Performed data analytics on the IoT devices used to track test users.
  • Automated business requirements with Python and R.
  • Built a POC chatbot for Facebook Messenger.


Big Data Engineering with Spark

See certificate

Hadoop Platform and Application Framework

See certificate

Introduction to Big Data

See certificate