Norbert Orzechowicz - Consulting Services

Unlock the Power of Data with Expert Consulting Services

My consulting services include:


Comprehensive Presentation Before Full-Scale Consulting

Before diving into full-scale consulting, I offer a detailed presentation, usually lasting
one to two hours, that provides a big-picture view of how to build a company-wide data mesh.
This presentation covers various options, helping you understand which solution best fits your needs.
It can also serve as a valuable resource for business analysts, business intelligence teams,
data scientists, and reporting teams to align on the data strategy and infrastructure, ensuring a cohesive approach across departments.


Workshops

Empower your team with hands-on learning experiences tailored to your organization's needs. My workshops focus on practical, real-world applications of data engineering, system architecture, and best practices in building scalable, efficient data solutions.

Key topics covered in the workshops include:

  • Data Processing and Automation Frameworks: Learn how to set up and use automated data pipelines using industry-standard tools and best practices.
  • Building a Company-Wide Data Mesh: Understand the principles, architecture, and operational aspects of implementing a robust data mesh tailored to your organization.
  • Scalable System Architecture: Gain insights into designing and optimizing systems to handle large-scale data processing efficiently.
  • Modern Data Engineering Tools: Hands-on sessions with tools like Apache Spark, Kafka, and cloud-based data processing services.
  • Custom Topics: Workshops can be customized to include your specific challenges, tools, or frameworks.

Workshops are designed to provide actionable knowledge and skills, enabling participants to immediately apply what they’ve learned to real-world problems. Depending on your needs, these sessions can include:

  • Instructor-led coding sessions
  • Breakout exercises for team collaboration
  • Problem-solving challenges specific to your domain
  • Interactive Q&A to address your unique concerns

Workshops are available in both remote and on-site formats, allowing flexibility in how your team participates and engages. Whether you’re looking to upskill your developers, align your team on a data strategy, or tackle a specific technical challenge, these workshops will equip your team with the knowledge and tools they need to succeed.

Interactive, Hands-On Workshops Tailored for Maximum Impact

Each workshop is designed to provide a highly engaging, practical experience that empowers developers to tackle real-world data challenges with confidence.
Every session is meticulously crafted to combine structured learning with hands-on tasks, ensuring each participant walks away with actionable skills.

Workshop Structure

1. Brief Introduction & Goal Setting (up to 15 minutes):

We start with a concise introduction outlining the objectives and providing the context needed to ensure every participant is aligned with the session's goals.

2. Hands-On Problem-Solving in Pairs or Small Groups:

Participants dive into practical tasks, starting with entry-level problems that build foundational understanding. Before each task, I provide clear explanations of the problem and its objectives, setting the stage for effective collaboration and focused learning.

3. Real-Time Guidance and Support:

As participants work through the tasks (ranging from 30 to 60 minutes each), I actively engage with each group, moving between rooms to answer questions, provide guidance, and keep progress on track. This personalized approach ensures that everyone receives the support they need.

4. Discussion, Feedback, and Iteration:

After each task, we come together to discuss solutions, address questions, and gather feedback. This collaborative review solidifies understanding and enhances learning outcomes before moving to the next challenge.

Example: Introduction to Data Processing

Here’s what a typical data processing introduction workshop might look like:

  • Introduction to iterating over large datasets: Learn strategies for handling datasets too large to fit into memory (see the sketch below).
  • Data curation and cleaning: Techniques for removing inconsistencies and preparing data for analysis.
  • Break (coffee)
  • Schema management: Understand the importance of schemas and how to enforce them effectively.
  • Data type conversion: Work with flat and nested structures while mastering advanced transformations.
  • Break (lunch)
  • Data flattening: Simplify complex structures to make datasets usable.
  • Data joining: Combine datasets efficiently and correctly.
  • Break (coffee)
  • Data aggregation: Learn to group and summarize data for meaningful insights.
  • Data partitioning: Optimize datasets for performance and scalability.

Each task is accompanied by detailed explanations, clear objectives, and helpful tips, ensuring that even developers with minimal background knowledge can confidently participate and succeed.
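
To give a flavor of the first agenda item, here is a minimal Python sketch of iterating over a file in fixed-size batches so it never has to fit into memory. The file name and the "amount" column are hypothetical; actual exercises use datasets matched to your stack.

    # Minimal sketch: stream a CSV in fixed-size batches instead of loading it whole.
    # "orders.csv" and the "amount" column are hypothetical.
    import csv
    from typing import Dict, Iterator, List


    def iter_batches(path: str, batch_size: int = 10_000) -> Iterator[List[Dict[str, str]]]:
        with open(path, newline="") as handle:
            reader = csv.DictReader(handle)
            batch: List[Dict[str, str]] = []
            for row in reader:
                batch.append(row)
                if len(batch) == batch_size:
                    yield batch
                    batch = []
            if batch:  # emit the final, possibly smaller batch
                yield batch


    # Hypothetical usage: sum an "amount" column without loading the whole file.
    total = 0.0
    for batch in iter_batches("orders.csv"):
        total += sum(float(row["amount"]) for row in batch)
    print(f"total order value: {total:.2f}")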

Practical Focus, Tangible Outcomes

Every workshop is practice-first, theory-second. Participants actively apply what they learn in a supportive environment, with clear instructions and goals guiding them throughout the session. By the end of the workshop, your team will not only have mastered new concepts but also gained the confidence to apply these skills to real-world projects.

Ready to transform your team’s capabilities? Let’s discuss how a tailored workshop can address your unique needs and challenges.


Bootstrapping a Data Processing Framework

I provide a comprehensive service to set up a fully automated data processing infrastructure using PHP, Scala, Java, or Python. This includes:

  • Selecting and configuring the right tools and technologies
  • Preparing local and production environments
  • Setting up or adjusting CI/CD pipelines
  • Developing a codebase with a clear interface for developers, making it as simple as writing transformation jobs
  • Implementing monitoring and telemetry solutions

With this framework in place, developers only need to focus on writing the transformation code, registering it in the application, and testing it locally.

For example, if a task involves moving orders from storage A to storage B within a given timeframe, the developer would:

  • Implement the JobInterface for data processing
  • Register the job
  • Define input parameters such as start-date and end-date
  • Write and test the job code locally, for example using the CLI command:
    $ ./data-mesh push:orders --start-date="2024-09-01" --end-date="2024-10-01"

The framework provides developers, through the JobInterface, with the following (illustrated in the sketch after this list):

  • Access to data storages
  • Secrets access
  • Input parameters/arguments
  • Telemetry
  • Error handling
  • Job chaining
  • Log collection
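
To make this concrete, here is a minimal sketch in Python (one of the supported languages) of what such a job could look like. JobContext and the accessor names are illustrative assumptions, not the framework's actual API:

    # Minimal, hypothetical sketch: JobContext and the accessors below are
    # illustrative assumptions, not the framework's actual API.
    import logging
    from abc import ABC, abstractmethod
    from dataclasses import dataclass
    from datetime import date
    from typing import Any, Dict


    @dataclass
    class JobContext:
        """What the framework hands to every job: parameters, storages, a logger."""
        params: Dict[str, str]
        storages: Dict[str, Any]
        log: logging.Logger

        def param(self, name: str) -> str:
            return self.params[name]

        def storage(self, name: str) -> Any:
            return self.storages[name]


    class JobInterface(ABC):
        @abstractmethod
        def run(self, ctx: JobContext) -> None:
            ...


    class PushOrders(JobInterface):
        """Moves orders from storage A to storage B within a given timeframe."""

        def run(self, ctx: JobContext) -> None:
            start = date.fromisoformat(ctx.param("start-date"))
            end = date.fromisoformat(ctx.param("end-date"))
            source = ctx.storage("orders-a")   # access to data storages
            target = ctx.storage("orders-b")
            ctx.log.info("pushing orders from %s to %s", start, end)  # log collection
            for batch in source.read(start=start, end=end):  # read in batches
                target.write(batch)

Registration, telemetry, and CLI wiring stay with the framework; the developer's surface area is essentially the run method.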

This framework lowers the entry barrier, enabling developers unfamiliar with data engineering tools to work with Apache Spark.
The approachable syntax eases onboarding, allowing smooth integration into a company-wide data mesh.
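
As a purely illustrative PySpark snippet (paths, column names, and the timeframe are hypothetical), the transformation body of such a job might boil down to a few lines:

    # Illustrative only: paths, column names, and the timeframe are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("push-orders").getOrCreate()

    # Read orders from storage A, keep the requested timeframe, append to storage B.
    orders = spark.read.parquet("s3://storage-a/orders/")
    in_range = orders.filter(
        (F.col("created_at") >= "2024-09-01") & (F.col("created_at") < "2024-10-01")
    )
    in_range.write.mode("append").parquet("s3://storage-b/orders/")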

The most daunting aspect of automating data processing is often setting up the infrastructure and configuring tools.
My goal is to remove that complexity, delivering a well-defined, plug-and-play solution that empowers your team to get started quickly and confidently.

Book a free consulting session!

I'm available for hire as a consultant on your next project.
Book a free 30-minute consultation today to discuss your needs and how I can help you.

Note: All my services, including consulting, workshops, and presentations, are highly flexible and can be tailored to address the specific challenges of your organization, team, or project. Additionally, they are available in both English and Polish to suit your needs.

About Me

I'm a software engineer and architect with extensive experience in building high-scalability web applications and data processing systems.

For the last 16 years, I have worked with various companies, from startups to large enterprises around the world, helping them build and scale their systems.

Among all the projects I've worked on, one of the biggest was designing and implementing a dedicated logistics platform for one of the largest Amazon FBM sellers in the US.
The system processed orders worth over $250M annually.

With a project of this scale, collecting, processing, and exposing data was crucial for the business to operate efficiently.
One of the most critical requirements was to design and implement a flexible and scalable business intelligence solution that would let analysts and business owners make data-driven decisions.

I'm also a maintainer of several open-source projects, and I enjoy automating and optimizing everything around me.