Career Highlights

Data Analytics Visualization

I've always worked in small to medium companies, often as the only data person or within a small team. This gave me the opportunity to build projects from scratch, taking them from initial concept to deployment. Many of these projects mirrored things I was already experimenting with in personal projects, allowing me to play around with solutions before bringing them into production.

These are some of the key projects that shaped my experience and what I learned from them.

Building a Complete Data Infrastructure

At much. GmbH, my current employer, I had the opportunity to build a full data and business intelligence architecture from the ground up, entirely with open-source tools. Unlike my previous roles, where I focused on individual projects, here I was responsible for designing a complete system, one that serves both our internal needs and our clients. The result is a service and product that I'm deeply proud of, and one that has been serving us exceptionally well.

  • Business Intelligence: My first task was to find the right open-source BI tool. I chose Apache Superset, which is now our internal dashboarding solution and a product we offer to clients.
  • Data Warehousing: Since the company's core business is implementing Odoo ERP for clients, my goal was to map and model the most important Odoo apps, like Sales, CRM, Finance, Projects, Timesheets, Recruitment, and more. I built a structured data warehouse using dbt (data build tool), creating a single source of truth that serves both our internal needs and client projects.
  • Data Automation: To keep everything running smoothly, we needed a data orchestrator. I implemented Dagster, which now manages all of our ELT pipelines (see the sketch after this list).
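
To give a feel for how the pieces fit together, here is a minimal Dagster sketch of an ELT flow like ours. The asset names and logic are illustrative placeholders, not the actual pipeline; in production the transform step runs through dbt.

```python
from dagster import AssetSelection, Definitions, ScheduleDefinition, asset, define_asset_job

# Hypothetical assets; the real pipeline covers many more Odoo apps.
@asset
def raw_odoo_sales():
    """Extract raw sales order data from the Odoo database into the warehouse."""
    ...

@asset(deps=[raw_odoo_sales])
def sales_mart():
    """Transform the raw extract into a reporting-ready model (via dbt in practice)."""
    ...

# One job that materializes every asset, scheduled nightly.
daily_elt = define_asset_job("daily_elt", selection=AssetSelection.all())

defs = Definitions(
    assets=[raw_odoo_sales, sales_mart],
    jobs=[daily_elt],
    schedules=[ScheduleDefinition(job=daily_elt, cron_schedule="0 2 * * *")],
)
```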

With this setup, we now have a structured approach to every data request: scheduling complex tasks is seamless, the effort spent modeling Odoo's database for ourselves is easily replicated for clients, and building on open-source Superset ensures that BI is widely adopted across teams.

A Full-Stack Web App

This project was a bit of a side quest for me, different from my usual data work, but it turned out to be one of the most rewarding things I've built. Since a web app is a visual, contained project with well-defined goals, I could clearly see its impact as it took shape, which is why it's probably the one I'm most proud of. During the months I spent developing it, I learned more about web development than ever before.

For context, MaxiQuim, the company I was working for, sold market reports to B2B clients, delivering them as PDFs and web-based data through a small existing platform. We wanted to expand its features, but working with the original developers was slow, expensive, and frustrating. I proposed rebuilding the app in-house, basing my approach on a web-based Spotify Dashboard I had previously built as a hobby. Here's how the project was structured:

  • Django (Python) Backend: Using Django REST Framework, we integrated an existing market data database into a new API while significantly expanding its scope. Django was key, as it provided an easy-to-use management panel for employees to add new data and modify what was published for clients (a sketch follows this list).
  • React Frontend: Designed with modular components to display interactive charts, making it easy to add new reports as needed. We also built it as a Progressive Web App (PWA), a feature customers specifically requested so they could access reports like a native app.
  • Payment & Subscription System: Integrated with a payment provider and subscription manager, allowing users to subscribe in-app, browse available reports, and manage their purchases.
  • Deployment & Hosting: Developed with Docker and deployed on a VPS for stability and scalability.
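
To illustrate the backend side, here is a minimal Django REST Framework sketch of how a published-reports endpoint could look. The MarketReport model and its fields are hypothetical, not the actual MaxiQuim schema.

```python
from django.db import models
from rest_framework import routers, serializers, viewsets
from rest_framework.permissions import IsAuthenticated


class MarketReport(models.Model):
    # Illustrative fields only; employees manage these rows via Django admin.
    title = models.CharField(max_length=200)
    published = models.BooleanField(default=False)  # controls client visibility
    data = models.JSONField()  # chart-ready payload for the React frontend


class MarketReportSerializer(serializers.ModelSerializer):
    class Meta:
        model = MarketReport
        fields = ["id", "title", "data"]


class MarketReportViewSet(viewsets.ReadOnlyModelViewSet):
    """Read-only API: content is edited in the admin panel, not over the API."""
    serializer_class = MarketReportSerializer
    permission_classes = [IsAuthenticated]

    def get_queryset(self):
        # Only expose reports that have been marked as published.
        return MarketReport.objects.filter(published=True)


router = routers.DefaultRouter()
router.register(r"reports", MarketReportViewSet, basename="report")
urlpatterns = router.urls
```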

I built the project alongside an intern, and more than a year after I left the company, they told me the platform was still running strong and had been expanded with even more features.

Using Machine Learning to Classify Global Trade Data

Still at MaxiQuim, part of our work involved analyzing import and export data for plastics (HDPE, PET, etc.), particularly to track the activity of the biggest global producers. The dataset we used was somewhat obscure: it was never mentioned by any government source, and it eventually disappeared after a crackdown on its publication.

This dataset provided only broad descriptions of shipping containers arriving in Brazil. By combining the know-how of where major producers were located with an understanding of the types of plastics they manufactured, we could match imports to specific companies. Since some of this classification had already been done manually in past reports, we set out to automate the process using machine learning.

  • Deep Learning for Classification: After testing several classic ML models, from linear regression to SVC, I found that only certain deep learning approaches worked reliably; I eventually built them with Keras (see the sketch after this list).
  • NLP & Text Processing: The project required heavy text analysis, so I deep-dived into NLP, vectorization, and text-based classification techniques.
  • Automated Pipelines: The final models, each trained to identify a specific type of plastic (HDPE, PVC, etc.), were deployed in our data pipelines, feeding directly into our reports.
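
As a rough illustration of the approach, here is a minimal Keras sketch of one such binary classifier: raw container descriptions pass through a text vectorization layer into a small network that predicts whether a shipment matches a given plastic. The sample data and layer sizes are placeholders, not the production models.

```python
import tensorflow as tf

# Toy examples standing in for the real shipping descriptions.
descriptions = tf.constant([
    "HDPE RESIN PELLETS IN BAGS",
    "PET BOTTLE GRADE CHIPS",
    "MACHINERY SPARE PARTS",
    "PVC SUSPENSION RESIN",
])
labels = tf.constant([1, 0, 0, 0])  # 1 = matches the target plastic (here: HDPE)

# Learn a vocabulary from the raw text, then embed and classify.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=20_000, output_mode="int", output_sequence_length=50
)
vectorize.adapt(descriptions)

model = tf.keras.Sequential([
    vectorize,                                        # raw strings -> token ids
    tf.keras.layers.Embedding(20_000, 64),            # token ids -> dense vectors
    tf.keras.layers.GlobalAveragePooling1D(),         # pool over the sequence
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of a match
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(descriptions, labels, epochs=5)
```

In this one-model-per-plastic setup, each trained classifier can be run over incoming records inside the pipeline, flagging the shipments relevant to a given report.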

The project took only a few months, but it was a fascinating challenge. Its results became a core part of nearly every report we sold.